Jan 24 00:03:35 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 24 00:03:35 crc restorecon[4578]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 24 00:03:35 crc restorecon[4578]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc 
restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc 
restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 
00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 
crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 
00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 00:03:35 crc 
restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc 
restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc 
restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 
crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc 
restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc 
restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc 
restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc 
restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc 
restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 00:03:35 crc restorecon[4578]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 24 00:03:35 crc restorecon[4578]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 24 00:03:36 crc kubenswrapper[4676]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:03:36 crc kubenswrapper[4676]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 24 00:03:36 crc kubenswrapper[4676]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:03:36 crc kubenswrapper[4676]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 00:03:36 crc kubenswrapper[4676]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 24 00:03:36 crc kubenswrapper[4676]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.092675 4676 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098210 4676 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098269 4676 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098282 4676 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098298 4676 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098310 4676 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098320 4676 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098328 4676 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098336 4676 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098344 4676 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098352 4676 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098360 4676 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098368 4676 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098402 4676 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098410 4676 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098418 4676 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098427 4676 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098435 4676 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098443 4676 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 
00:03:36.098451 4676 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098460 4676 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098471 4676 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098483 4676 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098492 4676 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098501 4676 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098509 4676 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098518 4676 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098527 4676 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098535 4676 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098543 4676 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098550 4676 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098558 4676 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098567 4676 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 
00:03:36.098574 4676 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098582 4676 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098591 4676 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098604 4676 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098613 4676 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098621 4676 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098630 4676 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098638 4676 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098646 4676 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098655 4676 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098664 4676 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098672 4676 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098680 4676 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098690 4676 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098697 4676 
feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098706 4676 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098715 4676 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098723 4676 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098732 4676 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098740 4676 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098748 4676 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098756 4676 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098764 4676 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098771 4676 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098779 4676 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098788 4676 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098798 4676 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098808 4676 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098816 4676 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098825 4676 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098836 4676 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098844 4676 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098852 4676 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098860 4676 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098868 4676 feature_gate.go:330] unrecognized feature gate: Example Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098875 4676 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098883 4676 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098891 4676 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.098899 4676 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099064 4676 flags.go:64] FLAG: --address="0.0.0.0" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099083 4676 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099100 4676 flags.go:64] FLAG: --anonymous-auth="true" Jan 24 00:03:36 crc 
kubenswrapper[4676]: I0124 00:03:36.099112 4676 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099127 4676 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099136 4676 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099148 4676 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099160 4676 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099170 4676 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099179 4676 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099189 4676 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099199 4676 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099209 4676 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099218 4676 flags.go:64] FLAG: --cgroup-root="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099227 4676 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099236 4676 flags.go:64] FLAG: --client-ca-file="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099244 4676 flags.go:64] FLAG: --cloud-config="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099253 4676 flags.go:64] FLAG: --cloud-provider="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099262 4676 flags.go:64] FLAG: --cluster-dns="[]" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099273 4676 
flags.go:64] FLAG: --cluster-domain="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099282 4676 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099291 4676 flags.go:64] FLAG: --config-dir="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099300 4676 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099310 4676 flags.go:64] FLAG: --container-log-max-files="5" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099321 4676 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099330 4676 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099340 4676 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099350 4676 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099359 4676 flags.go:64] FLAG: --contention-profiling="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099368 4676 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099408 4676 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099417 4676 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099428 4676 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099440 4676 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099449 4676 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099458 4676 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 
24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099467 4676 flags.go:64] FLAG: --enable-load-reader="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099477 4676 flags.go:64] FLAG: --enable-server="true" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099487 4676 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099499 4676 flags.go:64] FLAG: --event-burst="100" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099509 4676 flags.go:64] FLAG: --event-qps="50" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099519 4676 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099528 4676 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099537 4676 flags.go:64] FLAG: --eviction-hard="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099549 4676 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099558 4676 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099568 4676 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099577 4676 flags.go:64] FLAG: --eviction-soft="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099586 4676 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099595 4676 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099605 4676 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099613 4676 flags.go:64] FLAG: --experimental-mounter-path="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099622 4676 flags.go:64] FLAG: --fail-cgroupv1="false" 
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099631 4676 flags.go:64] FLAG: --fail-swap-on="true" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099640 4676 flags.go:64] FLAG: --feature-gates="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099651 4676 flags.go:64] FLAG: --file-check-frequency="20s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099661 4676 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099670 4676 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099680 4676 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099689 4676 flags.go:64] FLAG: --healthz-port="10248" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099698 4676 flags.go:64] FLAG: --help="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099708 4676 flags.go:64] FLAG: --hostname-override="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099716 4676 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099726 4676 flags.go:64] FLAG: --http-check-frequency="20s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099735 4676 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099744 4676 flags.go:64] FLAG: --image-credential-provider-config="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099753 4676 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099763 4676 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099772 4676 flags.go:64] FLAG: --image-service-endpoint="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099782 4676 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 24 00:03:36 crc 
kubenswrapper[4676]: I0124 00:03:36.099791 4676 flags.go:64] FLAG: --kube-api-burst="100" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099800 4676 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099811 4676 flags.go:64] FLAG: --kube-api-qps="50" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099820 4676 flags.go:64] FLAG: --kube-reserved="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099829 4676 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099838 4676 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099848 4676 flags.go:64] FLAG: --kubelet-cgroups="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099857 4676 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099866 4676 flags.go:64] FLAG: --lock-file="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099874 4676 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099883 4676 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099893 4676 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099907 4676 flags.go:64] FLAG: --log-json-split-stream="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099917 4676 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099926 4676 flags.go:64] FLAG: --log-text-split-stream="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099935 4676 flags.go:64] FLAG: --logging-format="text" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099944 4676 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" 
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099954 4676 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099964 4676 flags.go:64] FLAG: --manifest-url="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099973 4676 flags.go:64] FLAG: --manifest-url-header="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099984 4676 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.099994 4676 flags.go:64] FLAG: --max-open-files="1000000" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100006 4676 flags.go:64] FLAG: --max-pods="110" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100015 4676 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100025 4676 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100035 4676 flags.go:64] FLAG: --memory-manager-policy="None" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100044 4676 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100053 4676 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100063 4676 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100072 4676 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100091 4676 flags.go:64] FLAG: --node-status-max-images="50" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100100 4676 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100110 4676 flags.go:64] FLAG: --oom-score-adj="-999" Jan 24 00:03:36 crc 
kubenswrapper[4676]: I0124 00:03:36.100119 4676 flags.go:64] FLAG: --pod-cidr="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100128 4676 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100141 4676 flags.go:64] FLAG: --pod-manifest-path="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100151 4676 flags.go:64] FLAG: --pod-max-pids="-1" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100160 4676 flags.go:64] FLAG: --pods-per-core="0" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100169 4676 flags.go:64] FLAG: --port="10250" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100178 4676 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100187 4676 flags.go:64] FLAG: --provider-id="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100196 4676 flags.go:64] FLAG: --qos-reserved="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100205 4676 flags.go:64] FLAG: --read-only-port="10255" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100214 4676 flags.go:64] FLAG: --register-node="true" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100223 4676 flags.go:64] FLAG: --register-schedulable="true" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100232 4676 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100247 4676 flags.go:64] FLAG: --registry-burst="10" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100256 4676 flags.go:64] FLAG: --registry-qps="5" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100265 4676 flags.go:64] FLAG: --reserved-cpus="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100298 4676 flags.go:64] FLAG: --reserved-memory="" Jan 
24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100313 4676 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100324 4676 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100333 4676 flags.go:64] FLAG: --rotate-certificates="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100342 4676 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100351 4676 flags.go:64] FLAG: --runonce="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100361 4676 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100392 4676 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100402 4676 flags.go:64] FLAG: --seccomp-default="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100411 4676 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100420 4676 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100429 4676 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100439 4676 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100449 4676 flags.go:64] FLAG: --storage-driver-password="root" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100458 4676 flags.go:64] FLAG: --storage-driver-secure="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100466 4676 flags.go:64] FLAG: --storage-driver-table="stats" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100475 4676 flags.go:64] FLAG: --storage-driver-user="root" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100484 4676 flags.go:64] FLAG: 
--streaming-connection-idle-timeout="4h0m0s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100494 4676 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100504 4676 flags.go:64] FLAG: --system-cgroups="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100512 4676 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100526 4676 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100536 4676 flags.go:64] FLAG: --tls-cert-file="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100545 4676 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100556 4676 flags.go:64] FLAG: --tls-min-version="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100566 4676 flags.go:64] FLAG: --tls-private-key-file="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100575 4676 flags.go:64] FLAG: --topology-manager-policy="none" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100584 4676 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100592 4676 flags.go:64] FLAG: --topology-manager-scope="container" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100602 4676 flags.go:64] FLAG: --v="2" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100614 4676 flags.go:64] FLAG: --version="false" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100631 4676 flags.go:64] FLAG: --vmodule="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100641 4676 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.100651 4676 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.100873 4676 feature_gate.go:330] 
unrecognized feature gate: InsightsConfigAPI Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.100883 4676 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.100894 4676 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.100903 4676 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.100911 4676 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.100919 4676 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.100927 4676 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.100935 4676 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.100942 4676 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.100950 4676 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.100958 4676 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.100966 4676 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.100973 4676 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.100981 4676 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.100988 4676 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 
00:03:36.100999 4676 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101013 4676 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101022 4676 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101030 4676 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101038 4676 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101046 4676 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101055 4676 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101063 4676 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101073 4676 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101083 4676 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101093 4676 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101102 4676 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101110 4676 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101119 4676 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101127 4676 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101136 4676 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101145 4676 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101153 4676 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101161 4676 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101170 4676 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101178 4676 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101186 4676 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101194 4676 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101203 4676 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101211 4676 
feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101219 4676 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101228 4676 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101236 4676 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101245 4676 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101253 4676 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101268 4676 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101276 4676 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101285 4676 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101296 4676 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101304 4676 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101313 4676 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101321 4676 feature_gate.go:330] unrecognized feature gate: Example Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101329 4676 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101337 4676 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 24 00:03:36 crc 
kubenswrapper[4676]: W0124 00:03:36.101345 4676 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101354 4676 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101362 4676 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101370 4676 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101399 4676 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101411 4676 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101422 4676 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101431 4676 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101440 4676 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101451 4676 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101460 4676 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101469 4676 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101479 4676 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101487 4676 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101496 4676 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101505 4676 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.101514 4676 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.101542 4676 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.113694 4676 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.113737 4676 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.113891 4676 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.113906 4676 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.113916 4676 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.113926 4676 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.113935 4676 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.113944 4676 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.113953 4676 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.113962 4676 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.113971 4676 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114016 4676 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114025 4676 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114033 4676 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114042 4676 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114050 4676 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114058 4676 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114066 4676 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114077 4676 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114087 4676 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114096 4676 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114106 4676 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114117 4676 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114128 4676 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114136 4676 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114145 4676 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114155 4676 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114163 4676 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114171 4676 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114179 4676 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114186 4676 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114197 4676 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114205 4676 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114213 4676 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114221 4676 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114230 4676 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114237 4676 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114245 4676 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114253 4676 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114261 4676 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114269 4676 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114277 4676 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114285 4676 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114292 4676 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114301 4676 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114309 4676 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114317 4676 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114325 4676 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114333 4676 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114342 4676 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114350 4676 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114358 4676 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114365 4676 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114402 4676 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114418 4676 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114430 4676 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114439 4676 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114448 4676 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114457 4676 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114466 4676 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114474 4676 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114482 4676 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114490 4676 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114498 4676 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114508 4676 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114517 4676 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114526 4676 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114535 4676 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114544 4676 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114553 4676 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114562 4676 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114570 4676 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114578 4676 feature_gate.go:330] unrecognized feature gate: Example
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.114591 4676 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114820 4676 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114831 4676 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114841 4676 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114849 4676 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114856 4676 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114864 4676 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114873 4676 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114881 4676 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114889 4676 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114897 4676 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114905 4676 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114913 4676 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114920 4676 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114929 4676 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114936 4676 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114944 4676 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114953 4676 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114960 4676 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114968 4676 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114976 4676 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114983 4676 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.114994 4676 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115003 4676 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115012 4676 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115021 4676 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115031 4676 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115041 4676 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115050 4676 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115058 4676 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115069 4676 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115077 4676 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115088 4676 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115098 4676 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115108 4676 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115118 4676 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115128 4676 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115138 4676 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115146 4676 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115155 4676 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115163 4676 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115170 4676 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115179 4676 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115187 4676 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115195 4676 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115203 4676 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115210 4676 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115218 4676 feature_gate.go:330] unrecognized feature gate: Example
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115226 4676 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115234 4676 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115242 4676 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115249 4676 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115257 4676 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115265 4676 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115273 4676 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115280 4676 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115288 4676 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115296 4676 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115304 4676 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115312 4676 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115320 4676 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115327 4676 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115335 4676 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115343 4676 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115351 4676 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115359 4676 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115367 4676 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115402 4676 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115413 4676 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115424 4676 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115433 4676 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.115442 4676 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.115453 4676 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.115654 4676 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.119824 4676 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.119954 4676 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.120682 4676 server.go:997] "Starting client certificate rotation"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.120729 4676 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.121243 4676 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-17 23:27:17.456478848 +0000 UTC
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.121432 4676 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.127468 4676 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 24 00:03:36 crc kubenswrapper[4676]: E0124 00:03:36.129211 4676 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.129885 4676 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.141910 4676 log.go:25] "Validated CRI v1 runtime API"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.161071 4676 log.go:25] "Validated CRI v1 image API"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.165495 4676 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.168012 4676 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-23-23-57-48-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.168045 4676 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.181028 4676 manager.go:217] Machine: {Timestamp:2026-01-24 00:03:36.180036994 +0000 UTC m=+0.210008035 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d7308ad2-105f-4282-b3b4-bf5b6bfb52ce BootID:55c3ff0e-ee2f-473a-9424-ac0aeb395b03 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b4:f5:9d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b4:f5:9d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:56:82:c2 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:20:43:64 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e6:fc:fa Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:5f:4e:24 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:1a:41:f3:46:c4:73 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6e:31:98:ef:07:09 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.181232 4676 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.181348 4676 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.181960 4676 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.182182 4676 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.182222 4676 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.182465 4676 topology_manager.go:138] "Creating topology manager with none policy"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.182478 4676 container_manager_linux.go:303] "Creating device plugin manager"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.182726 4676 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.182768 4676 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.183048 4676 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.183444 4676 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.184464 4676 kubelet.go:418] "Attempting to sync node with API server"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.184489 4676 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.184516 4676 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.184531 4676 kubelet.go:324] "Adding apiserver pod source"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.184547 4676 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.186892 4676 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.186904 4676 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 24 00:03:36 crc kubenswrapper[4676]: E0124 00:03:36.187070 4676 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:03:36 crc kubenswrapper[4676]: E0124 00:03:36.187184 4676 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.187830 4676 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.188613 4676 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.190025 4676 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.191075 4676 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.191241 4676 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.191353 4676 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.191516 4676 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.191651 4676 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.191779 4676 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.191888 4676 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.191998 4676 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.192106 4676 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.192212 4676 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.192332 4676 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.192480 4676 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.192831 4676 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.193739 4676 server.go:1280] "Started kubelet"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.194747 4676 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.194796 4676 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.195759 4676 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.195913 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.195950 4676 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 24 00:03:36 crc systemd[1]: Started Kubernetes Kubelet.
Jan 24 00:03:36 crc kubenswrapper[4676]: E0124 00:03:36.195797 4676 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.27:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d81dd5238c18d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 00:03:36.193679757 +0000 UTC m=+0.223650798,LastTimestamp:2026-01-24 00:03:36.193679757 +0000 UTC m=+0.223650798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.196503 4676 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.196546 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 17:23:35.329074848 +0000 UTC
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.198370 4676 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.198418 4676 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.198570 4676 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 24 00:03:36 crc kubenswrapper[4676]: E0124 00:03:36.199777 4676 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.201157 4676 server.go:460] "Adding debug handlers to kubelet server"
Jan 24 00:03:36 crc kubenswrapper[4676]: E0124 00:03:36.201737 4676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="200ms"
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.201889 4676 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 24 00:03:36 crc kubenswrapper[4676]: E0124 00:03:36.201980 4676 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.208074 4676 factory.go:55] Registering systemd factory
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.208105 4676 factory.go:221] Registration of the systemd container factory successfully
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.213405 4676 factory.go:153] Registering CRI-O factory
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.213497 4676 factory.go:221] Registration of the crio container factory successfully
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.213634 4676 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.213708 4676 factory.go:103] Registering Raw factory
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.213772 4676 manager.go:1196] Started watching for new ooms in manager
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.214505 4676 manager.go:319] Starting recovery of all containers
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216532 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216616 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216632 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216653 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216669 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216683 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216698 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216713 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216731 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216744 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216758 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216772 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216786 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216804 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216878 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216894 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216910 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216924 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216938 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.216989 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217004 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217094 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217108 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217122 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217138 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217152 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217168 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217184 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217197 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217210 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217222 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217237 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217254 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217285 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217323 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217336 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217350 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217364 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217395 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217410 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217423 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217435 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217449 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217465 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217479 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217492 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217506 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217520 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217534 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217548 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217560 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217576 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217596 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217611 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217626 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217641 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217655 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217669 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217683 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217697 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217710 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217722 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217734 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217746 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217759 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217775 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217792 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217804 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217819 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217832 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217846 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217860 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217874 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217917 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217932 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217946 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217958 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217970 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217984 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.217997 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218008 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218020 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218032 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218046 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218059 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28"
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218073 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218086 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218099 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218111 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218125 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218137 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218151 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218164 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218178 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218194 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218207 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218222 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" 
seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218235 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218249 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218262 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218276 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218289 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218303 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: 
I0124 00:03:36.218317 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218339 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218352 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218366 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218396 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218410 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218424 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218443 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218462 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218480 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218499 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218516 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218530 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218546 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218564 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218581 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218599 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218616 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218631 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218647 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218664 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218682 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218697 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218713 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218726 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" 
seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218741 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218759 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218773 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218786 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218800 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218812 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218825 4676 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218839 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218849 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218860 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218874 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218887 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218903 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218921 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218936 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218948 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218961 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218974 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218986 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.218999 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219011 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219022 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219034 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219049 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219063 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" 
seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219076 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219089 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219101 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219113 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219126 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219138 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 
00:03:36.219151 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219163 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219178 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219192 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219205 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219218 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219229 4676 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219242 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219254 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219266 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219279 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219294 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219305 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219316 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219330 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219342 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219358 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219399 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219414 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219428 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219441 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219452 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219466 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219481 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219493 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219504 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219515 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219530 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219542 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219553 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219565 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219577 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219590 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219601 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219612 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219623 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219635 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219646 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219659 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.219671 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.220354 4676 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.220422 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.220440 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.220454 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.220466 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.220478 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.220491 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.220504 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.220516 4676 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.220529 4676 reconstruct.go:97] "Volume reconstruction finished" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.220538 4676 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.233610 4676 manager.go:324] Recovery completed Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.246287 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.249825 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.249900 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.249916 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.251406 4676 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.251426 4676 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.251447 4676 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.251934 4676 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.254359 4676 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.254419 4676 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.254449 4676 kubelet.go:2335] "Starting kubelet main sync loop" Jan 24 00:03:36 crc kubenswrapper[4676]: E0124 00:03:36.254505 4676 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.257367 4676 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Jan 24 00:03:36 crc kubenswrapper[4676]: E0124 00:03:36.257460 4676 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.258283 4676 policy_none.go:49] "None policy: Start" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.259316 4676 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.259346 4676 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:03:36 crc kubenswrapper[4676]: E0124 00:03:36.300626 4676 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.335632 4676 manager.go:334] "Starting Device Plugin manager" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.335694 4676 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.335707 4676 server.go:79] "Starting device plugin registration server" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.336047 4676 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.336059 4676 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.336225 4676 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.336329 4676 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.336338 4676 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:03:36 crc kubenswrapper[4676]: E0124 00:03:36.343964 4676 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.355165 4676 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.355282 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.357531 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.357569 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.357582 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.357721 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.358067 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.358124 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.358549 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.358576 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.358586 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.358716 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.358857 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.358892 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.359119 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.359138 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.359147 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.359429 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.359466 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.359481 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.359652 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.359775 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.359811 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.361900 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.361921 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.361931 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.362038 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.362420 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.362450 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.362461 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.362488 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.362501 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.362464 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.362620 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.362632 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.363166 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.363172 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.363182 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.363191 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:36 
crc kubenswrapper[4676]: I0124 00:03:36.363193 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.363203 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.363358 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.363403 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.365558 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.365798 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.365813 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:36 crc kubenswrapper[4676]: E0124 00:03:36.403220 4676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="400ms" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.435897 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: 
I0124 00:03:36.435976 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.436016 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.436056 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.436103 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.436135 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.436163 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.436175 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.436192 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.436269 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.436298 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.436342 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.436363 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.436422 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.436438 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.436453 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.437233 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.437280 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.437297 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.437354 4676 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: E0124 00:03:36.438021 4676 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.537855 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.537922 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.537958 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538003 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538054 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538096 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538114 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538164 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538131 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538062 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538246 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538317 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538365 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538456 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538504 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538556 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538603 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538651 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538715 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538765 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.539006 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.539045 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.539070 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.539095 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.539122 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.539155 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.538254 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.539186 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.539320 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.539043 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.638541 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.639835 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.639865 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.639877 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.639904 4676 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: E0124 00:03:36.640436 4676 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.686753 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.691714 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.715570 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.723104 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: I0124 00:03:36.726101 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.727706 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f24e602455ae47e0e1ffc764cbbcff4b30c23459b565b05eefcfd54d04fcced0 WatchSource:0}: Error finding container f24e602455ae47e0e1ffc764cbbcff4b30c23459b565b05eefcfd54d04fcced0: Status 404 returned error can't find the container with id f24e602455ae47e0e1ffc764cbbcff4b30c23459b565b05eefcfd54d04fcced0
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.737830 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-082fa87031f3e492825d592a55e787df5a6a7215f03f643e101723c64b2d3904 WatchSource:0}: Error finding container 082fa87031f3e492825d592a55e787df5a6a7215f03f643e101723c64b2d3904: Status 404 returned error can't find the container with id 082fa87031f3e492825d592a55e787df5a6a7215f03f643e101723c64b2d3904
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.753924 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-f55c396308501ceb0a7e9d2dbd52f1707ac9705246d7da8a776245c931632ac9 WatchSource:0}: Error finding container f55c396308501ceb0a7e9d2dbd52f1707ac9705246d7da8a776245c931632ac9: Status 404 returned error can't find the container with id f55c396308501ceb0a7e9d2dbd52f1707ac9705246d7da8a776245c931632ac9
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.779500 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e111bdb4550b256eeb8c2d41ee7acd41929095d3a13967e5089c3de6fbbb3fa6 WatchSource:0}: Error finding container e111bdb4550b256eeb8c2d41ee7acd41929095d3a13967e5089c3de6fbbb3fa6: Status 404 returned error can't find the container with id e111bdb4550b256eeb8c2d41ee7acd41929095d3a13967e5089c3de6fbbb3fa6
Jan 24 00:03:36 crc kubenswrapper[4676]: W0124 00:03:36.780558 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e5902e1964cc786ee2e765fbc04f7acf4c4bc4aed95c8aa29cec065574f0c93a WatchSource:0}: Error finding container e5902e1964cc786ee2e765fbc04f7acf4c4bc4aed95c8aa29cec065574f0c93a: Status 404 returned error can't find the container with id e5902e1964cc786ee2e765fbc04f7acf4c4bc4aed95c8aa29cec065574f0c93a
Jan 24 00:03:36 crc kubenswrapper[4676]: E0124 00:03:36.805177 4676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="800ms"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.041261 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.042620 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.042657 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.042668 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.042691 4676 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 24 00:03:37 crc kubenswrapper[4676]: E0124 00:03:37.043071 4676 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc"
Jan 24 00:03:37 crc kubenswrapper[4676]: W0124 00:03:37.065830 4676 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 24 00:03:37 crc kubenswrapper[4676]: E0124 00:03:37.065888 4676 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:03:37 crc kubenswrapper[4676]: W0124 00:03:37.168143 4676 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 24 00:03:37 crc kubenswrapper[4676]: E0124 00:03:37.168511 4676 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:03:37 crc kubenswrapper[4676]: W0124 00:03:37.174316 4676 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 24 00:03:37 crc kubenswrapper[4676]: E0124 00:03:37.174450 4676 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.195367 4676 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.197426 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:38:03.408117696 +0000 UTC
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.258460 4676 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c" exitCode=0
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.258513 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c"}
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.258576 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e111bdb4550b256eeb8c2d41ee7acd41929095d3a13967e5089c3de6fbbb3fa6"}
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.258740 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.260689 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.260755 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.260793 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.261422 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d"}
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.261472 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f24e602455ae47e0e1ffc764cbbcff4b30c23459b565b05eefcfd54d04fcced0"}
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.263308 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.263875 4676 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c8e40d7ba93858915cd80a18bfee202c1e6b2672cd41eff2441d6d5178d98e1d" exitCode=0
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.263960 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c8e40d7ba93858915cd80a18bfee202c1e6b2672cd41eff2441d6d5178d98e1d"}
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.263991 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"082fa87031f3e492825d592a55e787df5a6a7215f03f643e101723c64b2d3904"}
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.264063 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.264513 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.264539 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.264550 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.265215 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.265247 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.265260 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.269818 4676 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51" exitCode=0
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.269893 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51"}
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.269915 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e5902e1964cc786ee2e765fbc04f7acf4c4bc4aed95c8aa29cec065574f0c93a"}
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.270003 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.272924 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.273049 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.273241 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.273757 4676 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0" exitCode=0
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.273812 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0"}
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.273844 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f55c396308501ceb0a7e9d2dbd52f1707ac9705246d7da8a776245c931632ac9"}
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.273991 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.275058 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.275099 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.275116 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 00:03:37 crc kubenswrapper[4676]: W0124 00:03:37.409921 4676 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 24 00:03:37 crc kubenswrapper[4676]: E0124 00:03:37.409992 4676 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:03:37 crc kubenswrapper[4676]: E0124 00:03:37.606212 4676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="1.6s"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.843820 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.846717 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.846780 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.846791 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 00:03:37 crc kubenswrapper[4676]: I0124 00:03:37.846814 4676 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 24 00:03:37 crc kubenswrapper[4676]: E0124 00:03:37.848506 4676 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.197654 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:23:05.46788114 +0000 UTC
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.249945 4676 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.277943 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ef7ee0b4dfd54ec0a33df18eba05dbd234ef0ed39fe66b05ee5d8254614955fa"}
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.277996 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4342a165126bd52a03ab2a8ac09666d08d16d3b8034de7b6be1ef02506798c94"}
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.278013 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1880c78addfa5865cfdb73ac1d2965ff8142978ac0814615ea0d6ecb005f5847"}
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.278120 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.278905 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.278951 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.278965 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.279669 4676 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb" exitCode=0
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.279717 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb"}
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.279829 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.280511 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.280539 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.280550 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.282811 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9"}
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.282843 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa"}
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.282859 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b"}
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.282874 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142"}
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.287304 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75"}
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.287333 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7"}
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.287346 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02"}
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.287428 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.288034 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.288055 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.288067 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.290545 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"04c7475f668593c7097dbf2dd1453baa25cff2333367eadc62a1124a240dfe05"}
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.290616 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.291256 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.291280 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.291290 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 00:03:38 crc kubenswrapper[4676]: I0124 00:03:38.654742 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.197996 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:55:41.063904656 +0000 UTC
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.297909 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc"}
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.298036 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.299726 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.299780 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.299802 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.301689 4676 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9" exitCode=0
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.301797 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9"}
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.301852 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.302003 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.303440 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.303497 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.303518 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.303740 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.303795 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.303820 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.448728 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.449697 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.449758 4676 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.449780 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.449818 4676 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.631632 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.637742 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 00:03:39 crc kubenswrapper[4676]: I0124 00:03:39.637895 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.060984 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.155861 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.199029 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 07:34:06.81826347 +0000 UTC Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.307011 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b"} Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.307061 4676 prober_manager.go:312] 
"Failed to trigger a manual run" probe="Readiness" Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.307070 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f"} Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.307082 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d"} Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.307090 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62"} Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.307093 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.307111 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.307835 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.307859 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.307867 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.307932 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:40 crc kubenswrapper[4676]: 
I0124 00:03:40.307950 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.307959 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:40 crc kubenswrapper[4676]: I0124 00:03:40.809063 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:03:41 crc kubenswrapper[4676]: I0124 00:03:41.199488 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 17:42:22.678758778 +0000 UTC Jan 24 00:03:41 crc kubenswrapper[4676]: I0124 00:03:41.316149 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753"} Jan 24 00:03:41 crc kubenswrapper[4676]: I0124 00:03:41.316224 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:41 crc kubenswrapper[4676]: I0124 00:03:41.316301 4676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:03:41 crc kubenswrapper[4676]: I0124 00:03:41.316410 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:41 crc kubenswrapper[4676]: I0124 00:03:41.316487 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:41 crc kubenswrapper[4676]: I0124 00:03:41.317085 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:41 crc kubenswrapper[4676]: I0124 00:03:41.317114 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 00:03:41 crc kubenswrapper[4676]: I0124 00:03:41.317124 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:41 crc kubenswrapper[4676]: I0124 00:03:41.317799 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:41 crc kubenswrapper[4676]: I0124 00:03:41.317834 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:41 crc kubenswrapper[4676]: I0124 00:03:41.317846 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:41 crc kubenswrapper[4676]: I0124 00:03:41.317885 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:41 crc kubenswrapper[4676]: I0124 00:03:41.317897 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:41 crc kubenswrapper[4676]: I0124 00:03:41.317904 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:42 crc kubenswrapper[4676]: I0124 00:03:42.200438 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 00:17:41.991610681 +0000 UTC Jan 24 00:03:42 crc kubenswrapper[4676]: I0124 00:03:42.318324 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:42 crc kubenswrapper[4676]: I0124 00:03:42.318407 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:42 crc kubenswrapper[4676]: I0124 00:03:42.319572 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:42 
crc kubenswrapper[4676]: I0124 00:03:42.319609 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:42 crc kubenswrapper[4676]: I0124 00:03:42.319622 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:42 crc kubenswrapper[4676]: I0124 00:03:42.319579 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:42 crc kubenswrapper[4676]: I0124 00:03:42.319727 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:42 crc kubenswrapper[4676]: I0124 00:03:42.319744 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:42 crc kubenswrapper[4676]: I0124 00:03:42.638182 4676 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 00:03:42 crc kubenswrapper[4676]: I0124 00:03:42.638404 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 00:03:43 crc kubenswrapper[4676]: I0124 00:03:43.200661 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:06:46.74401054 +0000 UTC Jan 24 00:03:43 crc kubenswrapper[4676]: 
I0124 00:03:43.752189 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 24 00:03:43 crc kubenswrapper[4676]: I0124 00:03:43.752462 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:43 crc kubenswrapper[4676]: I0124 00:03:43.754318 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:43 crc kubenswrapper[4676]: I0124 00:03:43.754435 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:43 crc kubenswrapper[4676]: I0124 00:03:43.754464 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:44 crc kubenswrapper[4676]: I0124 00:03:44.185468 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 00:03:44 crc kubenswrapper[4676]: I0124 00:03:44.185679 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:44 crc kubenswrapper[4676]: I0124 00:03:44.186908 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:44 crc kubenswrapper[4676]: I0124 00:03:44.186961 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:44 crc kubenswrapper[4676]: I0124 00:03:44.186980 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:44 crc kubenswrapper[4676]: I0124 00:03:44.201157 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:11:53.438106312 +0000 UTC Jan 24 00:03:45 crc kubenswrapper[4676]: I0124 00:03:45.202058 4676 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:32:15.313251673 +0000 UTC Jan 24 00:03:45 crc kubenswrapper[4676]: I0124 00:03:45.968361 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 00:03:45 crc kubenswrapper[4676]: I0124 00:03:45.968629 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:45 crc kubenswrapper[4676]: I0124 00:03:45.970150 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:45 crc kubenswrapper[4676]: I0124 00:03:45.970229 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:45 crc kubenswrapper[4676]: I0124 00:03:45.970255 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:46 crc kubenswrapper[4676]: I0124 00:03:46.202445 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:04:49.081422039 +0000 UTC Jan 24 00:03:46 crc kubenswrapper[4676]: E0124 00:03:46.344082 4676 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 24 00:03:47 crc kubenswrapper[4676]: I0124 00:03:47.203146 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 09:02:27.302914341 +0000 UTC Jan 24 00:03:48 crc kubenswrapper[4676]: I0124 00:03:48.110278 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 24 00:03:48 crc kubenswrapper[4676]: I0124 00:03:48.110526 4676 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:48 crc kubenswrapper[4676]: I0124 00:03:48.112147 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:48 crc kubenswrapper[4676]: I0124 00:03:48.112215 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:48 crc kubenswrapper[4676]: I0124 00:03:48.112239 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:48 crc kubenswrapper[4676]: I0124 00:03:48.196097 4676 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 24 00:03:48 crc kubenswrapper[4676]: I0124 00:03:48.203746 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:42:30.209890763 +0000 UTC Jan 24 00:03:48 crc kubenswrapper[4676]: E0124 00:03:48.252250 4676 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 24 00:03:48 crc kubenswrapper[4676]: W0124 00:03:48.926614 4676 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 24 00:03:48 crc kubenswrapper[4676]: I0124 00:03:48.926744 4676 trace.go:236] Trace[596862461]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 00:03:38.922) (total time: 10003ms): Jan 24 00:03:48 crc kubenswrapper[4676]: Trace[596862461]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10003ms (00:03:48.926) Jan 24 00:03:48 crc kubenswrapper[4676]: Trace[596862461]: [10.003988124s] [10.003988124s] END Jan 24 00:03:48 crc kubenswrapper[4676]: E0124 00:03:48.926765 4676 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 24 00:03:49 crc kubenswrapper[4676]: I0124 00:03:49.203884 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:11:18.852986366 +0000 UTC Jan 24 00:03:49 crc kubenswrapper[4676]: E0124 00:03:49.207152 4676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 24 00:03:49 crc kubenswrapper[4676]: I0124 00:03:49.345624 4676 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 24 00:03:49 crc kubenswrapper[4676]: I0124 00:03:49.345690 4676 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 24 00:03:49 crc kubenswrapper[4676]: I0124 00:03:49.383865 4676 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 24 00:03:49 crc kubenswrapper[4676]: I0124 00:03:49.383929 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 24 00:03:50 crc kubenswrapper[4676]: I0124 00:03:50.165691 4676 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]log ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]etcd ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 24 00:03:50 
crc kubenswrapper[4676]: [+]poststarthook/generic-apiserver-start-informers ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/priority-and-fairness-filter ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/start-apiextensions-informers ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/start-apiextensions-controllers ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/crd-informer-synced ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/start-system-namespaces-controller ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 24 00:03:50 crc kubenswrapper[4676]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 24 00:03:50 crc kubenswrapper[4676]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/bootstrap-controller ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/start-kube-aggregator-informers ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 24 00:03:50 crc kubenswrapper[4676]: 
[+]poststarthook/apiservice-status-remote-available-controller ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/apiservice-registration-controller ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/apiservice-discovery-controller ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]autoregister-completion ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/apiservice-openapi-controller ok Jan 24 00:03:50 crc kubenswrapper[4676]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 24 00:03:50 crc kubenswrapper[4676]: livez check failed Jan 24 00:03:50 crc kubenswrapper[4676]: I0124 00:03:50.165949 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:03:50 crc kubenswrapper[4676]: I0124 00:03:50.204055 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:24:21.691594889 +0000 UTC Jan 24 00:03:51 crc kubenswrapper[4676]: I0124 00:03:51.204417 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:24:35.781659513 +0000 UTC Jan 24 00:03:52 crc kubenswrapper[4676]: I0124 00:03:52.205074 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:32:53.721199841 +0000 UTC Jan 24 00:03:52 crc kubenswrapper[4676]: I0124 00:03:52.305586 4676 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 24 00:03:52 crc 
kubenswrapper[4676]: I0124 00:03:52.327896 4676 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 24 00:03:52 crc kubenswrapper[4676]: I0124 00:03:52.638502 4676 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 00:03:52 crc kubenswrapper[4676]: I0124 00:03:52.638586 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 00:03:53 crc kubenswrapper[4676]: I0124 00:03:53.205978 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:07:24.670928747 +0000 UTC Jan 24 00:03:53 crc kubenswrapper[4676]: I0124 00:03:53.866065 4676 csr.go:261] certificate signing request csr-swjz9 is approved, waiting to be issued Jan 24 00:03:53 crc kubenswrapper[4676]: I0124 00:03:53.878338 4676 csr.go:257] certificate signing request csr-swjz9 is issued Jan 24 00:03:54 crc kubenswrapper[4676]: I0124 00:03:54.206798 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:17:39.179907134 +0000 UTC Jan 24 00:03:54 crc kubenswrapper[4676]: I0124 00:03:54.339177 4676 trace.go:236] Trace[660571960]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 
00:03:40.159) (total time: 14179ms): Jan 24 00:03:54 crc kubenswrapper[4676]: Trace[660571960]: ---"Objects listed" error: 14179ms (00:03:54.339) Jan 24 00:03:54 crc kubenswrapper[4676]: Trace[660571960]: [14.179800913s] [14.179800913s] END Jan 24 00:03:54 crc kubenswrapper[4676]: I0124 00:03:54.339210 4676 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 24 00:03:54 crc kubenswrapper[4676]: I0124 00:03:54.339544 4676 trace.go:236] Trace[791521244]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 00:03:39.635) (total time: 14703ms): Jan 24 00:03:54 crc kubenswrapper[4676]: Trace[791521244]: ---"Objects listed" error: 14703ms (00:03:54.339) Jan 24 00:03:54 crc kubenswrapper[4676]: Trace[791521244]: [14.703645735s] [14.703645735s] END Jan 24 00:03:54 crc kubenswrapper[4676]: I0124 00:03:54.339557 4676 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 24 00:03:54 crc kubenswrapper[4676]: I0124 00:03:54.340464 4676 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 24 00:03:54 crc kubenswrapper[4676]: I0124 00:03:54.348726 4676 trace.go:236] Trace[613796798]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 00:03:39.353) (total time: 14995ms): Jan 24 00:03:54 crc kubenswrapper[4676]: Trace[613796798]: ---"Objects listed" error: 14995ms (00:03:54.348) Jan 24 00:03:54 crc kubenswrapper[4676]: Trace[613796798]: [14.995151288s] [14.995151288s] END Jan 24 00:03:54 crc kubenswrapper[4676]: I0124 00:03:54.348753 4676 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 24 00:03:54 crc kubenswrapper[4676]: E0124 00:03:54.349778 4676 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 24 00:03:54 crc 
kubenswrapper[4676]: I0124 00:03:54.421322 4676 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58964->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 24 00:03:54 crc kubenswrapper[4676]: I0124 00:03:54.421323 4676 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58968->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 24 00:03:54 crc kubenswrapper[4676]: I0124 00:03:54.421367 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58964->192.168.126.11:17697: read: connection reset by peer" Jan 24 00:03:54 crc kubenswrapper[4676]: I0124 00:03:54.421408 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58968->192.168.126.11:17697: read: connection reset by peer" Jan 24 00:03:54 crc kubenswrapper[4676]: I0124 00:03:54.586133 4676 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 24 00:03:54 crc kubenswrapper[4676]: I0124 00:03:54.879450 4676 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-23 23:58:53 +0000 UTC, rotation deadline is 2026-11-30 
21:36:51.557210319 +0000 UTC Jan 24 00:03:54 crc kubenswrapper[4676]: I0124 00:03:54.879496 4676 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7461h32m56.677716848s for next certificate rotation Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.161120 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.161656 4676 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.161722 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.167994 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.196931 4676 apiserver.go:52] "Watching apiserver" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.200258 4676 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.200607 4676 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.200957 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.201196 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.201348 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.201424 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.201924 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.202064 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.202130 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.202334 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.202485 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.203772 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.203913 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.204064 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.205338 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.205416 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.205427 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.205478 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.205938 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.209241 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 09:13:46.822899023 +0000 UTC Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.209429 4676 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.226194 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.239144 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.253663 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.267496 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.280610 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.299911 4676 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.321438 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.335863 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346677 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346719 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346745 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346765 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346784 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346798 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346818 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346837 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346857 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346875 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346891 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346911 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346933 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346955 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346973 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.346987 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347002 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347020 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347039 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347055 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347072 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347089 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347107 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347122 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347137 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347152 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347169 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347194 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347211 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347227 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347247 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347269 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347291 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 24 00:03:55 crc 
kubenswrapper[4676]: I0124 00:03:55.347316 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347345 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347389 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347414 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347436 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347458 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347479 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347510 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347538 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347560 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347577 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347591 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347608 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347628 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347643 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347659 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347675 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347689 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347706 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347722 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347738 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347753 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347767 
4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347782 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347796 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347822 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347854 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347870 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347886 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347901 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347917 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347930 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347947 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347962 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347978 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.347997 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348012 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348026 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348040 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 24 
00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348054 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348069 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348085 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348101 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348115 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348131 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348147 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348161 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348176 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348191 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348206 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 00:03:55 crc 
kubenswrapper[4676]: I0124 00:03:55.348224 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348246 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348270 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348294 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348291 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348315 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348332 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348347 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348363 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348395 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348411 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348426 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348443 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348457 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348471 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348487 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 
00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348503 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348521 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348537 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348552 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348567 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348584 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348606 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348628 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348646 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348663 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348679 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348695 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348711 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348728 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348746 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348781 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348796 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348812 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348853 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348870 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348886 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348903 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 24 00:03:55 crc kubenswrapper[4676]: 
I0124 00:03:55.348920 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348936 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348951 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348969 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.348992 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349015 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349033 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349049 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349065 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349080 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349101 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349116 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349132 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349149 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349164 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349180 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349197 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349218 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349234 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349250 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349267 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349283 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 
00:03:55.349298 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349315 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349338 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349360 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349389 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349481 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349501 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349518 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349534 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349550 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349567 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 24 00:03:55 crc 
kubenswrapper[4676]: I0124 00:03:55.349584 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349601 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349619 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349635 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349653 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349674 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349709 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349733 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349756 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349780 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349804 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 24 00:03:55 crc 
kubenswrapper[4676]: I0124 00:03:55.349825 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349848 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349874 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349897 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349919 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349942 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349963 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.349987 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350008 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350031 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350054 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 24 00:03:55 crc 
kubenswrapper[4676]: I0124 00:03:55.350079 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350105 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350128 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350156 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350178 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350194 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350210 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350226 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350244 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350262 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350281 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350299 
4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350318 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350337 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350354 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350387 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350404 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350422 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350438 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350455 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350492 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350512 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350547 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350564 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350582 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350602 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350617 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350638 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350661 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350680 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350698 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350716 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350734 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350750 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.350781 4676 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.351250 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.351626 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.352312 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.352798 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.353055 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.353550 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.353799 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.353937 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.354134 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.354217 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.354501 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.354905 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.355046 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.355141 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.355167 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.355235 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.355234 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.355393 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.355453 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.355475 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.355805 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.355902 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.355955 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.355963 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.356101 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.356145 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.356250 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.356309 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.356423 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.356573 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.356661 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.356691 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.356858 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.356886 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.357010 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.357080 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.357102 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.357268 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.357370 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.357534 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.357696 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.357736 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.357754 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.360255 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.360534 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.360560 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.360657 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.360805 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.360803 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.360939 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.361069 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.361310 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.361429 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.361617 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.361796 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.361929 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.362072 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.362203 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.362216 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.362387 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.362467 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.362541 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.362619 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.362647 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.362736 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.362759 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.362774 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.362953 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.363077 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.363168 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.363606 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.363974 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.364496 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.364714 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.364926 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.365120 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.365301 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.365570 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.366117 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.366195 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:03:55.866176269 +0000 UTC m=+19.896147270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.366591 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.366728 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.366806 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.367097 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.367139 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.367233 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.367420 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.367483 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.367573 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.367575 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.367748 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.367921 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.367966 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.368186 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.368515 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.368708 4676 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.369088 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.369359 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). 
InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.369606 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.370118 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.370297 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.370646 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.370773 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.371160 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.371541 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.371879 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.371941 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.371901 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.372643 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.372657 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.372744 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.372765 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.372795 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.372889 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.373002 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.373196 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.373614 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.377225 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.377327 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.377496 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.377657 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.377793 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.377964 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.378119 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.378270 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.379023 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.379266 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.379314 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.379485 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.379566 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.379798 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.379983 4676 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.380032 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.380059 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 00:03:55.880042962 +0000 UTC m=+19.910013963 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.380125 4676 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.380160 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 00:03:55.880154335 +0000 UTC m=+19.910125336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.380778 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.380945 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.383428 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.385219 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.385520 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.385997 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.386348 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.386649 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.388699 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.389081 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.389292 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.389428 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.389721 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.389900 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.389908 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.390154 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.390169 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.390449 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.390481 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.390510 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.390529 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.390678 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.390717 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.390791 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.390807 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.390818 4676 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.390871 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 00:03:55.890852867 +0000 UTC m=+19.920823868 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.390993 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.391003 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.391346 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.391441 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.391656 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.392538 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.394682 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.395084 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.395212 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.395227 4676 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.395298 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 00:03:55.895280826 +0000 UTC m=+19.925251827 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.395352 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.396178 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.396412 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.396465 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.396553 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.396626 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.397040 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.397108 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.397472 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.397530 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.398520 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.398628 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.398932 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.399241 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.399406 4676 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc" exitCode=255 Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.399428 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc"} Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.400110 4676 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.400589 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.404776 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.405652 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.405760 4676 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.405993 4676 scope.go:117] "RemoveContainer" containerID="5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.406017 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.408002 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.408011 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.408457 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.409691 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.409824 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.409980 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.410073 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.409660 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.410156 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.410207 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.410209 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.410266 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.410423 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.410489 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.411424 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.413175 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.413504 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.413765 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.414278 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.415134 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.417794 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.417970 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.427627 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.434995 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.441133 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.444211 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.448218 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453154 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453182 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453240 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453252 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453262 4676 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453270 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453279 
4676 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453290 4676 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453298 4676 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453306 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453315 4676 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453323 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453331 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453340 4676 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453349 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453357 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453366 4676 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453389 4676 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453398 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453406 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453415 4676 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453423 4676 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453432 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453440 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453450 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453458 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453470 4676 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453482 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453493 4676 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453505 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453516 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453525 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453535 4676 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453546 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453558 4676 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453570 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453580 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453589 4676 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453597 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453606 4676 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453614 4676 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453622 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath 
\"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453630 4676 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453638 4676 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453645 4676 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453653 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453661 4676 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453668 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453676 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453673 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453810 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453684 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453848 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453860 4676 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453871 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453880 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 
24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453889 4676 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453901 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453914 4676 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453928 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453940 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453951 4676 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453960 4676 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.454155 4676 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.453968 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.454756 4676 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.454773 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.454784 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.454795 4676 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.454807 4676 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.454818 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.455351 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.455388 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.455400 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.455411 4676 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node 
\"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.455422 4676 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.455433 4676 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456160 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456176 4676 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456189 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456200 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456212 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456223 4676 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456234 4676 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456245 4676 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456255 4676 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456267 4676 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456278 4676 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456290 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456302 4676 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath 
\"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456365 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456395 4676 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456408 4676 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456420 4676 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456432 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456444 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456459 4676 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456472 4676 reconciler_common.go:293] "Volume 
detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456485 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456498 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456509 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456522 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456533 4676 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456545 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.456557 4676 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457014 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457027 4676 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457093 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457106 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457141 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457156 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457171 4676 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc 
kubenswrapper[4676]: I0124 00:03:55.457182 4676 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457193 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457229 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457241 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457254 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457266 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457280 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457318 4676 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457331 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457342 4676 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457352 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457396 4676 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457412 4676 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457422 4676 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457435 4676 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457481 4676 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457497 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457509 4676 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457519 4676 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457531 4676 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457542 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457556 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457568 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457581 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457591 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457602 4676 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457613 4676 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457624 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457635 4676 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 
00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457645 4676 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457656 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457668 4676 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457679 4676 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457690 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457702 4676 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457714 4676 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 
00:03:55.457728 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457740 4676 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457752 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457764 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457775 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457786 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457797 4676 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457808 4676 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457820 4676 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457831 4676 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457842 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457854 4676 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457865 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457876 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457885 4676 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457893 4676 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457901 4676 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457909 4676 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457917 4676 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457925 4676 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457933 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457941 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 24 
00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457950 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457958 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457966 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457974 4676 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457983 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457992 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.457999 4676 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458007 4676 reconciler_common.go:293] "Volume 
detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458015 4676 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458023 4676 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458031 4676 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458039 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458047 4676 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458055 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458063 4676 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458071 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458079 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458087 4676 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458095 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458103 4676 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458111 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458119 4676 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458126 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458134 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458142 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458150 4676 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458158 4676 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.458168 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.466158 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.476267 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.490965 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.526303 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.535463 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.542395 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 00:03:55 crc kubenswrapper[4676]: W0124 00:03:55.562211 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-9284ce85b5ad52e51c0ca7e0622038442ef0d0993981770cbcd7b91cc77b0dbd WatchSource:0}: Error finding container 9284ce85b5ad52e51c0ca7e0622038442ef0d0993981770cbcd7b91cc77b0dbd: Status 404 returned error can't find the container with id 9284ce85b5ad52e51c0ca7e0622038442ef0d0993981770cbcd7b91cc77b0dbd Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.569140 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-5dg9q"] Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.569417 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5dg9q" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.575355 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.575582 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.578283 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.602126 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.655769 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.664632 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/efe79b06-a59d-4d3c-9161-839d4e60fb52-hosts-file\") pod \"node-resolver-5dg9q\" (UID: \"efe79b06-a59d-4d3c-9161-839d4e60fb52\") " pod="openshift-dns/node-resolver-5dg9q" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.664664 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cht5r\" (UniqueName: \"kubernetes.io/projected/efe79b06-a59d-4d3c-9161-839d4e60fb52-kube-api-access-cht5r\") pod \"node-resolver-5dg9q\" (UID: \"efe79b06-a59d-4d3c-9161-839d4e60fb52\") " pod="openshift-dns/node-resolver-5dg9q" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.668357 4676 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.689692 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.710993 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.724135 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.736025 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.765671 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/efe79b06-a59d-4d3c-9161-839d4e60fb52-hosts-file\") pod \"node-resolver-5dg9q\" (UID: \"efe79b06-a59d-4d3c-9161-839d4e60fb52\") " pod="openshift-dns/node-resolver-5dg9q" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.765733 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cht5r\" (UniqueName: \"kubernetes.io/projected/efe79b06-a59d-4d3c-9161-839d4e60fb52-kube-api-access-cht5r\") pod 
\"node-resolver-5dg9q\" (UID: \"efe79b06-a59d-4d3c-9161-839d4e60fb52\") " pod="openshift-dns/node-resolver-5dg9q" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.765890 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/efe79b06-a59d-4d3c-9161-839d4e60fb52-hosts-file\") pod \"node-resolver-5dg9q\" (UID: \"efe79b06-a59d-4d3c-9161-839d4e60fb52\") " pod="openshift-dns/node-resolver-5dg9q" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.774750 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.789206 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cht5r\" (UniqueName: \"kubernetes.io/projected/efe79b06-a59d-4d3c-9161-839d4e60fb52-kube-api-access-cht5r\") pod \"node-resolver-5dg9q\" (UID: \"efe79b06-a59d-4d3c-9161-839d4e60fb52\") " pod="openshift-dns/node-resolver-5dg9q" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.866366 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.866550 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 00:03:56.866532825 +0000 UTC m=+20.896503826 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.889218 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5dg9q" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.966848 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.966885 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.966904 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.966922 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.967018 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.967031 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.967040 4676 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.967075 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 00:03:56.967063729 +0000 UTC m=+20.997034730 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.967116 4676 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.967144 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 00:03:56.96713893 +0000 UTC m=+20.997109931 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.967168 4676 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.967186 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-24 00:03:56.967180861 +0000 UTC m=+20.997151852 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.967221 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.967229 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.967237 4676 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:55 crc kubenswrapper[4676]: E0124 00:03:55.967255 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 00:03:56.967249393 +0000 UTC m=+20.997220394 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.974552 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 00:03:55 crc kubenswrapper[4676]: I0124 00:03:55.991484 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.000432 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.027766 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.038079 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.057069 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.072814 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.078897 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.088960 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.100313 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.120976 4676 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 24 00:03:56 crc kubenswrapper[4676]: W0124 00:03:56.121108 4676 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 24 00:03:56 crc kubenswrapper[4676]: W0124 00:03:56.121130 4676 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 24 00:03:56 crc kubenswrapper[4676]: W0124 00:03:56.121166 4676 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 00:03:56 crc kubenswrapper[4676]: W0124 00:03:56.121193 4676 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 00:03:56 crc kubenswrapper[4676]: W0124 
00:03:56.121192 4676 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 00:03:56 crc kubenswrapper[4676]: W0124 00:03:56.121216 4676 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 00:03:56 crc kubenswrapper[4676]: W0124 00:03:56.121238 4676 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 24 00:03:56 crc kubenswrapper[4676]: W0124 00:03:56.121246 4676 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 00:03:56 crc kubenswrapper[4676]: W0124 00:03:56.121256 4676 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 24 00:03:56 crc kubenswrapper[4676]: W0124 00:03:56.121294 4676 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 24 00:03:56 crc kubenswrapper[4676]: W0124 
00:03:56.121259 4676 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 00:03:56 crc kubenswrapper[4676]: W0124 00:03:56.121320 4676 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 24 00:03:56 crc kubenswrapper[4676]: E0124 00:03:56.121591 4676 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": read tcp 38.102.83.27:48472->38.102.83.27:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-apiserver-crc.188d81e0930901c4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 00:03:50.165971396 +0000 UTC m=+14.195942397,LastTimestamp:2026-01-24 00:03:50.165971396 +0000 UTC m=+14.195942397,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.210144 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 00:53:49.465445343 +0000 UTC Jan 24 00:03:56 crc kubenswrapper[4676]: 
I0124 00:03:56.255269 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:03:56 crc kubenswrapper[4676]: E0124 00:03:56.255396 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.258630 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.259122 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.259850 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.260985 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.261563 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.262035 4676 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.262947 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.263476 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.264431 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.264890 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.265747 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.266340 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.267167 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.267696 4676 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.268257 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.269081 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.269588 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.270309 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.270838 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.271340 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.272124 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.272730 4676 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.273113 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.274037 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.274527 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.275454 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.276010 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.278069 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.278114 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.278636 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.279481 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.279911 4676 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.280006 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 24 
00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.282756 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.283227 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.283725 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.285146 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.286086 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.286564 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.287511 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.288109 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 24 
00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.288892 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.289441 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.290363 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.291293 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.292055 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.293284 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.293405 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.293947 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.294776 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.295218 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.295706 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.296143 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.296651 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.297209 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.297668 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.309280 4676 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\
":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.321446 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.339172 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.342469 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-x57xf"] Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.342716 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.344000 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ld569"] Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.344571 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.347232 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ppmcr"] Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.347477 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.347734 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.347903 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.351111 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.351299 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-7mzrz"] Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.351713 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.352205 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.352395 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.352535 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.359945 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.360046 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.360220 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.360306 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.360336 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.360451 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.360455 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.371942 4676 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.381163 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.381799 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.381921 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.382199 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.382681 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.382726 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.412947 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.418299 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a"} Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.418536 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.420490 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.420857 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a2ca09c6467fd8af4e726359910c882217012224cf8db9e93fe056f2f8f04336"} Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 
00:03:56.424616 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90"} Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.424643 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a"} Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.424652 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c11a4d1f7d9392b187f98967c34b669b246804eb46906bd61cfc2be965a4c421"} Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.426201 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5dg9q" event={"ID":"efe79b06-a59d-4d3c-9161-839d4e60fb52","Type":"ContainerStarted","Data":"53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b"} Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.426242 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5dg9q" event={"ID":"efe79b06-a59d-4d3c-9161-839d4e60fb52","Type":"ContainerStarted","Data":"1116159e810236268e73eac6cd4b12bd9b570d8e39f251e02cb4ad8117e01696"} Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.428034 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51"} Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.428056 4676 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9284ce85b5ad52e51c0ca7e0622038442ef0d0993981770cbcd7b91cc77b0dbd"} Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.435555 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.450740 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.468586 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni
/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471296 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-cni-binary-copy\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471346 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-multus-socket-dir-parent\") pod \"multus-x57xf\" (UID: 
\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471364 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bd647b0d-6d3d-432d-81ac-6484a2948211-rootfs\") pod \"machine-config-daemon-7mzrz\" (UID: \"bd647b0d-6d3d-432d-81ac-6484a2948211\") " pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471396 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-kubelet\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471411 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-var-lib-cni-multus\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471425 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-var-lib-kubelet\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471440 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9vrg\" (UniqueName: \"kubernetes.io/projected/bd647b0d-6d3d-432d-81ac-6484a2948211-kube-api-access-w9vrg\") pod 
\"machine-config-daemon-7mzrz\" (UID: \"bd647b0d-6d3d-432d-81ac-6484a2948211\") " pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471456 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-system-cni-dir\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471470 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-run-ovn-kubernetes\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471485 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-etc-kubernetes\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471500 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovn-node-metrics-cert\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471571 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-system-cni-dir\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471669 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471727 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-ovn\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471745 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovnkube-script-lib\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471777 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd647b0d-6d3d-432d-81ac-6484a2948211-mcd-auth-proxy-config\") pod \"machine-config-daemon-7mzrz\" (UID: \"bd647b0d-6d3d-432d-81ac-6484a2948211\") " pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471793 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-openvswitch\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471810 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-run-netns\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471910 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-multus-daemon-config\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471945 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-run-netns\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471963 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-log-socket\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471976 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-cni-bin\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.471989 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4frqp\" (UniqueName: \"kubernetes.io/projected/24f0dc26-0857-430f-aebd-073fcfcc1c0a-kube-api-access-4frqp\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472019 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-var-lib-openvswitch\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472033 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-run-k8s-cni-cncf-io\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472049 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-cni-binary-copy\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc 
kubenswrapper[4676]: I0124 00:03:56.472071 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472105 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-var-lib-cni-bin\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472121 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-hostroot\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472136 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-run-multus-certs\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472151 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-slash\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 
00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472165 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovnkube-config\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472209 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd647b0d-6d3d-432d-81ac-6484a2948211-proxy-tls\") pod \"machine-config-daemon-7mzrz\" (UID: \"bd647b0d-6d3d-432d-81ac-6484a2948211\") " pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472225 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhp4b\" (UniqueName: \"kubernetes.io/projected/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-kube-api-access-rhp4b\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472240 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-os-release\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472283 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67bbh\" (UniqueName: \"kubernetes.io/projected/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-kube-api-access-67bbh\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " 
pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472313 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-systemd\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472348 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-os-release\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472363 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-node-log\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472405 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-env-overrides\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472420 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-multus-cni-dir\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " 
pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472434 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-cnibin\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472479 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-multus-conf-dir\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472499 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472513 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-etc-openvswitch\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472548 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-cnibin\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " 
pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472561 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-systemd-units\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.472575 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-cni-netd\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.479944 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.488908 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.499227 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.511036 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.520939 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.535339 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.546013 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591307 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd647b0d-6d3d-432d-81ac-6484a2948211-mcd-auth-proxy-config\") pod \"machine-config-daemon-7mzrz\" (UID: \"bd647b0d-6d3d-432d-81ac-6484a2948211\") " pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591346 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-openvswitch\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591369 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-run-netns\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591418 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-multus-daemon-config\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " 
pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591434 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-run-netns\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591452 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-log-socket\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591470 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-cni-bin\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591485 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4frqp\" (UniqueName: \"kubernetes.io/projected/24f0dc26-0857-430f-aebd-073fcfcc1c0a-kube-api-access-4frqp\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591506 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-var-lib-openvswitch\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc 
kubenswrapper[4676]: I0124 00:03:56.591525 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-run-k8s-cni-cncf-io\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591541 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-cni-binary-copy\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591558 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591593 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-var-lib-cni-bin\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591608 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-hostroot\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 
00:03:56.591624 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-run-multus-certs\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591712 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-slash\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591739 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovnkube-config\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591755 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd647b0d-6d3d-432d-81ac-6484a2948211-proxy-tls\") pod \"machine-config-daemon-7mzrz\" (UID: \"bd647b0d-6d3d-432d-81ac-6484a2948211\") " pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591770 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhp4b\" (UniqueName: \"kubernetes.io/projected/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-kube-api-access-rhp4b\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591784 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-os-release\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591799 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67bbh\" (UniqueName: \"kubernetes.io/projected/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-kube-api-access-67bbh\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591814 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-systemd\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591829 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-os-release\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591853 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-node-log\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591866 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-env-overrides\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591882 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-multus-cni-dir\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591896 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-cnibin\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591909 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-multus-conf-dir\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591923 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591937 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-etc-openvswitch\") pod \"ovnkube-node-ld569\" (UID: 
\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591953 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-cnibin\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591967 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-systemd-units\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.591984 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-cni-netd\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592019 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-cni-binary-copy\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592034 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-multus-socket-dir-parent\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " 
pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592048 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bd647b0d-6d3d-432d-81ac-6484a2948211-rootfs\") pod \"machine-config-daemon-7mzrz\" (UID: \"bd647b0d-6d3d-432d-81ac-6484a2948211\") " pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592062 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-kubelet\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592129 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-var-lib-cni-multus\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592149 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-var-lib-kubelet\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592288 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9vrg\" (UniqueName: \"kubernetes.io/projected/bd647b0d-6d3d-432d-81ac-6484a2948211-kube-api-access-w9vrg\") pod \"machine-config-daemon-7mzrz\" (UID: \"bd647b0d-6d3d-432d-81ac-6484a2948211\") " pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 
24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592329 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-system-cni-dir\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592345 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-run-ovn-kubernetes\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592361 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-etc-kubernetes\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592391 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovn-node-metrics-cert\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592417 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-system-cni-dir\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592432 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592457 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-ovn\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592472 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovnkube-script-lib\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592661 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.592883 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-node-log\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.593333 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovnkube-script-lib\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.593421 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-kubelet\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.593444 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-env-overrides\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.593473 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-multus-cni-dir\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.593469 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd647b0d-6d3d-432d-81ac-6484a2948211-mcd-auth-proxy-config\") pod \"machine-config-daemon-7mzrz\" (UID: \"bd647b0d-6d3d-432d-81ac-6484a2948211\") " pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.593497 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-openvswitch\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.593512 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-cnibin\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.593527 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-run-netns\") pod \"multus-x57xf\" (UID: 
\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.593538 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-multus-conf-dir\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.593730 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-var-lib-cni-multus\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.593794 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-var-lib-kubelet\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.593972 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-multus-daemon-config\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.593991 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc 
kubenswrapper[4676]: I0124 00:03:56.594016 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-hostroot\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594027 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-etc-openvswitch\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594025 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-var-lib-cni-bin\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594054 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-run-netns\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594085 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-systemd-units\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594089 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-cnibin\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594073 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-system-cni-dir\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594111 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-cni-netd\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594120 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-run-ovn-kubernetes\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594135 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-log-socket\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594161 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-cni-bin\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594151 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-etc-kubernetes\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594458 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-var-lib-openvswitch\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594506 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-run-k8s-cni-cncf-io\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594790 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovnkube-config\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594821 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-cni-binary-copy\") pod \"multus-x57xf\" (UID: 
\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594865 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-multus-socket-dir-parent\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594892 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bd647b0d-6d3d-432d-81ac-6484a2948211-rootfs\") pod \"machine-config-daemon-7mzrz\" (UID: \"bd647b0d-6d3d-432d-81ac-6484a2948211\") " pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594042 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-host-run-multus-certs\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.595018 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-cni-binary-copy\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.595149 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.594063 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-slash\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.595402 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-os-release\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.595421 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-system-cni-dir\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.595436 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-systemd\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.595467 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-ovn\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.595507 4676 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-os-release\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.597630 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.598293 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovn-node-metrics-cert\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.598846 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd647b0d-6d3d-432d-81ac-6484a2948211-proxy-tls\") pod \"machine-config-daemon-7mzrz\" (UID: \"bd647b0d-6d3d-432d-81ac-6484a2948211\") " pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.608319 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed
54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506c
e0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.612441 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67bbh\" (UniqueName: \"kubernetes.io/projected/b88e9d2e-35da-45a8-ac7e-22afd660ff9f-kube-api-access-67bbh\") pod \"multus-x57xf\" (UID: \"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\") " pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.613339 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9vrg\" (UniqueName: \"kubernetes.io/projected/bd647b0d-6d3d-432d-81ac-6484a2948211-kube-api-access-w9vrg\") pod \"machine-config-daemon-7mzrz\" (UID: \"bd647b0d-6d3d-432d-81ac-6484a2948211\") " pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:03:56 crc kubenswrapper[4676]: 
I0124 00:03:56.614633 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4frqp\" (UniqueName: \"kubernetes.io/projected/24f0dc26-0857-430f-aebd-073fcfcc1c0a-kube-api-access-4frqp\") pod \"ovnkube-node-ld569\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.615868 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhp4b\" (UniqueName: \"kubernetes.io/projected/79ad333b-cf18-4ba3-b9d4-2f89c7c44354-kube-api-access-rhp4b\") pod \"multus-additional-cni-plugins-ppmcr\" (UID: \"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\") " pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.624178 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.635057 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.651295 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.671440 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-x57xf" Jan 24 00:03:56 crc kubenswrapper[4676]: W0124 00:03:56.682413 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb88e9d2e_35da_45a8_ac7e_22afd660ff9f.slice/crio-8f476b3316aeee36dfa281df4104b2636e890e125d49340c30276eddbcd20c67 WatchSource:0}: Error finding container 8f476b3316aeee36dfa281df4104b2636e890e125d49340c30276eddbcd20c67: Status 404 returned error can't find the container with id 8f476b3316aeee36dfa281df4104b2636e890e125d49340c30276eddbcd20c67 Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.682800 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.703272 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.714468 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:03:56 crc kubenswrapper[4676]: W0124 00:03:56.742130 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79ad333b_cf18_4ba3_b9d4_2f89c7c44354.slice/crio-5cd2eacc7418e20bee23791d1168847e95ffa559aa7a9f4b13fabb531e7b4d1a WatchSource:0}: Error finding container 5cd2eacc7418e20bee23791d1168847e95ffa559aa7a9f4b13fabb531e7b4d1a: Status 404 returned error can't find the container with id 5cd2eacc7418e20bee23791d1168847e95ffa559aa7a9f4b13fabb531e7b4d1a Jan 24 00:03:56 crc kubenswrapper[4676]: W0124 00:03:56.765084 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd647b0d_6d3d_432d_81ac_6484a2948211.slice/crio-6792889cfb51332982504d10488c5aa6324a4ef991ad23646663e4d7cb7468bf WatchSource:0}: Error finding container 6792889cfb51332982504d10488c5aa6324a4ef991ad23646663e4d7cb7468bf: Status 404 returned error can't find the container with id 6792889cfb51332982504d10488c5aa6324a4ef991ad23646663e4d7cb7468bf Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.894856 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:03:56 crc kubenswrapper[4676]: E0124 00:03:56.895046 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:03:58.895031668 +0000 UTC m=+22.925002669 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.995813 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.995859 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.995877 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:03:56 crc kubenswrapper[4676]: I0124 00:03:56.995905 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:03:56 crc kubenswrapper[4676]: E0124 00:03:56.996000 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:03:56 crc kubenswrapper[4676]: E0124 00:03:56.996014 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:03:56 crc kubenswrapper[4676]: E0124 00:03:56.996023 4676 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:56 crc kubenswrapper[4676]: E0124 00:03:56.996062 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 00:03:58.996050281 +0000 UTC m=+23.026021282 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:56 crc kubenswrapper[4676]: E0124 00:03:56.996108 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:03:56 crc kubenswrapper[4676]: E0124 00:03:56.996118 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:03:56 crc kubenswrapper[4676]: E0124 00:03:56.996124 4676 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:56 crc kubenswrapper[4676]: E0124 00:03:56.996142 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 00:03:58.996136503 +0000 UTC m=+23.026107504 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:56 crc kubenswrapper[4676]: E0124 00:03:56.996168 4676 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:03:56 crc kubenswrapper[4676]: E0124 00:03:56.996184 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 00:03:58.996179585 +0000 UTC m=+23.026150586 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:03:56 crc kubenswrapper[4676]: E0124 00:03:56.996218 4676 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:03:56 crc kubenswrapper[4676]: E0124 00:03:56.996235 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 00:03:58.996230766 +0000 UTC m=+23.026201767 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.018678 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.038464 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.210265 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:12:14.519097545 +0000 UTC Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.254667 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.254702 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:03:57 crc kubenswrapper[4676]: E0124 00:03:57.254787 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:03:57 crc kubenswrapper[4676]: E0124 00:03:57.254859 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.263348 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.273546 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.366039 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.431581 4676 generic.go:334] "Generic (PLEG): container finished" podID="79ad333b-cf18-4ba3-b9d4-2f89c7c44354" containerID="0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe" exitCode=0 Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.431645 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" event={"ID":"79ad333b-cf18-4ba3-b9d4-2f89c7c44354","Type":"ContainerDied","Data":"0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.431668 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" 
event={"ID":"79ad333b-cf18-4ba3-b9d4-2f89c7c44354","Type":"ContainerStarted","Data":"5cd2eacc7418e20bee23791d1168847e95ffa559aa7a9f4b13fabb531e7b4d1a"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.433863 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x57xf" event={"ID":"b88e9d2e-35da-45a8-ac7e-22afd660ff9f","Type":"ContainerStarted","Data":"db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.433886 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x57xf" event={"ID":"b88e9d2e-35da-45a8-ac7e-22afd660ff9f","Type":"ContainerStarted","Data":"8f476b3316aeee36dfa281df4104b2636e890e125d49340c30276eddbcd20c67"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.435890 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.435941 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.435951 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"6792889cfb51332982504d10488c5aa6324a4ef991ad23646663e4d7cb7468bf"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.437162 4676 generic.go:334] "Generic (PLEG): container finished" podID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" 
containerID="c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551" exitCode=0 Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.437195 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerDied","Data":"c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.437260 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerStarted","Data":"a4ecac14869f9e61b6cf328f1a06fe9b463abccd1352f43143e709699b48fbd7"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.438395 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.456200 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.475294 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.488474 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.499649 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.504651 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.523779 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.543535 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 
00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.550262 4676 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.551856 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.551887 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.551896 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.551994 4676 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.558301 
4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc
18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.560105 4676 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.560235 4676 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.561007 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.561030 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 
00:03:57.561039 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.561054 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.561064 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:57Z","lastTransitionTime":"2026-01-24T00:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.568953 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.572687 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.573959 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 24 00:03:57 crc kubenswrapper[4676]: E0124 00:03:57.575489 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.578478 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.578533 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.578546 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.578562 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.578571 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:57Z","lastTransitionTime":"2026-01-24T00:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.586196 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.587580 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 24 00:03:57 crc kubenswrapper[4676]: E0124 00:03:57.589159 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.592144 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.592170 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.592180 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.592195 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.592428 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:57Z","lastTransitionTime":"2026-01-24T00:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.598274 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: E0124 00:03:57.602967 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.605336 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.605361 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.605370 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.605406 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.605415 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:57Z","lastTransitionTime":"2026-01-24T00:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.608936 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: E0124 00:03:57.623106 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.625685 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.627679 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.627773 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.627850 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.627914 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.627972 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:57Z","lastTransitionTime":"2026-01-24T00:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.638094 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: E0124 00:03:57.639509 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: E0124 00:03:57.639658 4676 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.640886 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.640972 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.641035 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.641093 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.641144 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:57Z","lastTransitionTime":"2026-01-24T00:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.650222 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.658253 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.662794 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.667308 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.673054 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.676149 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.682462 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.691205 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.702216 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.716236 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.728900 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b
902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.743472 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.743528 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.743539 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.743554 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.743565 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:57Z","lastTransitionTime":"2026-01-24T00:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.744399 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.757188 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc
10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.767867 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\
\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.776476 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.795821 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.846242 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.846273 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.846285 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.846301 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.846312 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:57Z","lastTransitionTime":"2026-01-24T00:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.885581 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-4bcxm"] Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.886058 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4bcxm" Jan 24 00:03:57 crc kubenswrapper[4676]: W0124 00:03:57.888886 4676 reflector.go:561] object-"openshift-image-registry"/"image-registry-certificates": failed to list *v1.ConfigMap: configmaps "image-registry-certificates" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 24 00:03:57 crc kubenswrapper[4676]: E0124 00:03:57.888936 4676 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-certificates\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-registry-certificates\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 24 00:03:57 crc kubenswrapper[4676]: W0124 00:03:57.889705 4676 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 24 00:03:57 crc kubenswrapper[4676]: E0124 00:03:57.889749 4676 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User 
\"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 24 00:03:57 crc kubenswrapper[4676]: W0124 00:03:57.889704 4676 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 24 00:03:57 crc kubenswrapper[4676]: E0124 00:03:57.890202 4676 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.900400 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.914341 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.930803 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.948473 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.948510 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.948522 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.948538 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.948550 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:57Z","lastTransitionTime":"2026-01-24T00:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.953716 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.972022 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.984019 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:57 crc kubenswrapper[4676]: I0124 00:03:57.997169 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:57Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.006280 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzmhq\" (UniqueName: 
\"kubernetes.io/projected/bc086f6b-af67-49e4-97c8-f8b70f19e49a-kube-api-access-gzmhq\") pod \"node-ca-4bcxm\" (UID: \"bc086f6b-af67-49e4-97c8-f8b70f19e49a\") " pod="openshift-image-registry/node-ca-4bcxm" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.006324 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bc086f6b-af67-49e4-97c8-f8b70f19e49a-serviceca\") pod \"node-ca-4bcxm\" (UID: \"bc086f6b-af67-49e4-97c8-f8b70f19e49a\") " pod="openshift-image-registry/node-ca-4bcxm" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.006343 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc086f6b-af67-49e4-97c8-f8b70f19e49a-host\") pod \"node-ca-4bcxm\" (UID: \"bc086f6b-af67-49e4-97c8-f8b70f19e49a\") " pod="openshift-image-registry/node-ca-4bcxm" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.012883 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.026035 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.038406 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 
00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.050542 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.050571 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.050579 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.050593 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.050601 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:58Z","lastTransitionTime":"2026-01-24T00:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.051368 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc
10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.068023 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.086567 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.107498 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bc086f6b-af67-49e4-97c8-f8b70f19e49a-serviceca\") pod \"node-ca-4bcxm\" (UID: \"bc086f6b-af67-49e4-97c8-f8b70f19e49a\") " pod="openshift-image-registry/node-ca-4bcxm" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.107546 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/bc086f6b-af67-49e4-97c8-f8b70f19e49a-host\") pod \"node-ca-4bcxm\" (UID: \"bc086f6b-af67-49e4-97c8-f8b70f19e49a\") " pod="openshift-image-registry/node-ca-4bcxm" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.107632 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzmhq\" (UniqueName: \"kubernetes.io/projected/bc086f6b-af67-49e4-97c8-f8b70f19e49a-kube-api-access-gzmhq\") pod \"node-ca-4bcxm\" (UID: \"bc086f6b-af67-49e4-97c8-f8b70f19e49a\") " pod="openshift-image-registry/node-ca-4bcxm" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.107810 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc086f6b-af67-49e4-97c8-f8b70f19e49a-host\") pod \"node-ca-4bcxm\" (UID: \"bc086f6b-af67-49e4-97c8-f8b70f19e49a\") " pod="openshift-image-registry/node-ca-4bcxm" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.127642 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.140454 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 24 00:03:58 crc 
kubenswrapper[4676]: I0124 00:03:58.152439 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.152460 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.152468 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.152483 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.152492 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:58Z","lastTransitionTime":"2026-01-24T00:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.156791 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.208961 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.210596 4676 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:30:46.94354132 +0000 UTC Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.213790 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.254662 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.254692 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.254701 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.254717 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.254729 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:58Z","lastTransitionTime":"2026-01-24T00:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.254824 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:03:58 crc kubenswrapper[4676]: E0124 00:03:58.254922 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.258116 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.295344 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.330294 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.357132 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.357158 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.357168 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.357182 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.357192 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:58Z","lastTransitionTime":"2026-01-24T00:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.367567 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.407190 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.444347 4676 generic.go:334] "Generic (PLEG): container finished" podID="79ad333b-cf18-4ba3-b9d4-2f89c7c44354" containerID="dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9" exitCode=0 Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.444409 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" event={"ID":"79ad333b-cf18-4ba3-b9d4-2f89c7c44354","Type":"ContainerDied","Data":"dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.450841 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerStarted","Data":"0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58"} Jan 24 00:03:58 crc 
kubenswrapper[4676]: I0124 00:03:58.450875 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerStarted","Data":"d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.450886 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerStarted","Data":"97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.450898 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerStarted","Data":"02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.450909 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerStarted","Data":"3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.450922 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerStarted","Data":"a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.453536 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.461201 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.461234 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.461244 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.461260 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.461270 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:58Z","lastTransitionTime":"2026-01-24T00:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.488677 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.530333 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.562721 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:58 crc 
kubenswrapper[4676]: I0124 00:03:58.562747 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.562755 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.562768 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.562777 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:58Z","lastTransitionTime":"2026-01-24T00:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.568070 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a
03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.612904 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 
00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.650914 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.665438 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.665493 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.665507 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.665531 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.665544 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:58Z","lastTransitionTime":"2026-01-24T00:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.690369 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.726004 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.767668 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.767723 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.767737 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.767757 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.767770 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:58Z","lastTransitionTime":"2026-01-24T00:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.774894 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.806598 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.839893 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.870553 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.870601 4676 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.870609 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.870628 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.870637 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:58Z","lastTransitionTime":"2026-01-24T00:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.877219 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.910651 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.918670 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:03:58 crc kubenswrapper[4676]: E0124 00:03:58.918808 4676 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:04:02.918782965 +0000 UTC m=+26.948753966 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.955331 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be3
0a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e5
1f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.973397 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.973448 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.973463 4676 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.973483 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.973500 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:58Z","lastTransitionTime":"2026-01-24T00:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:58 crc kubenswrapper[4676]: I0124 00:03:58.989813 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b1
7de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.020011 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.020251 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:03:59 crc kubenswrapper[4676]: E0124 
00:03:59.020242 4676 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.020360 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:03:59 crc kubenswrapper[4676]: E0124 00:03:59.020501 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 00:04:03.020474659 +0000 UTC m=+27.050445660 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:03:59 crc kubenswrapper[4676]: E0124 00:03:59.020349 4676 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.020588 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 
00:03:59 crc kubenswrapper[4676]: E0124 00:03:59.020697 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 00:04:03.020636364 +0000 UTC m=+27.050607365 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:03:59 crc kubenswrapper[4676]: E0124 00:03:59.020745 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:03:59 crc kubenswrapper[4676]: E0124 00:03:59.020765 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:03:59 crc kubenswrapper[4676]: E0124 00:03:59.020785 4676 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:59 crc kubenswrapper[4676]: E0124 00:03:59.020824 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 00:04:03.020816629 +0000 UTC m=+27.050787630 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:59 crc kubenswrapper[4676]: E0124 00:03:59.020940 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:03:59 crc kubenswrapper[4676]: E0124 00:03:59.021018 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:03:59 crc kubenswrapper[4676]: E0124 00:03:59.021090 4676 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:59 crc kubenswrapper[4676]: E0124 00:03:59.021198 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 00:04:03.02118409 +0000 UTC m=+27.051155091 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.029341 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/
var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.066439 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.076439 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.076511 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.076526 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.076552 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.076572 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:59Z","lastTransitionTime":"2026-01-24T00:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:59 crc kubenswrapper[4676]: E0124 00:03:59.108492 4676 configmap.go:193] Couldn't get configMap openshift-image-registry/image-registry-certificates: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:03:59 crc kubenswrapper[4676]: E0124 00:03:59.108613 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc086f6b-af67-49e4-97c8-f8b70f19e49a-serviceca podName:bc086f6b-af67-49e4-97c8-f8b70f19e49a nodeName:}" failed. No retries permitted until 2026-01-24 00:03:59.608587267 +0000 UTC m=+23.638558268 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serviceca" (UniqueName: "kubernetes.io/configmap/bc086f6b-af67-49e4-97c8-f8b70f19e49a-serviceca") pod "node-ca-4bcxm" (UID: "bc086f6b-af67-49e4-97c8-f8b70f19e49a") : failed to sync configmap cache: timed out waiting for the condition Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.151172 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.172295 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.178989 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:59 crc 
kubenswrapper[4676]: I0124 00:03:59.179083 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.179098 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.179123 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.179143 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:59Z","lastTransitionTime":"2026-01-24T00:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.196424 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a
03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.210815 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:11:08.689485911 +0000 UTC Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.235637 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 
00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.239431 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.249834 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzmhq\" (UniqueName: \"kubernetes.io/projected/bc086f6b-af67-49e4-97c8-f8b70f19e49a-kube-api-access-gzmhq\") pod \"node-ca-4bcxm\" (UID: \"bc086f6b-af67-49e4-97c8-f8b70f19e49a\") " pod="openshift-image-registry/node-ca-4bcxm" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.255145 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:03:59 crc kubenswrapper[4676]: E0124 00:03:59.255248 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.255311 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:03:59 crc kubenswrapper[4676]: E0124 00:03:59.255357 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.281601 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.281643 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.281653 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.281667 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.281676 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:59Z","lastTransitionTime":"2026-01-24T00:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.289051 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc
10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.327472 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\
\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.339886 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.384329 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.384601 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.384705 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.384802 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.384898 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:59Z","lastTransitionTime":"2026-01-24T00:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.385788 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.440412 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.457858 4676 generic.go:334] "Generic (PLEG): container finished" podID="79ad333b-cf18-4ba3-b9d4-2f89c7c44354" containerID="6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5" exitCode=0 Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.457902 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" event={"ID":"79ad333b-cf18-4ba3-b9d4-2f89c7c44354","Type":"ContainerDied","Data":"6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5"} Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.481189 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.486928 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.487121 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.487249 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.487413 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.487498 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:59Z","lastTransitionTime":"2026-01-24T00:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.512299 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.558516 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.587852 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.590050 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.590079 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.590091 
4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.590109 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.590121 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:59Z","lastTransitionTime":"2026-01-24T00:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.637192 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13
d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.641042 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.644910 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.649835 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bc086f6b-af67-49e4-97c8-f8b70f19e49a-serviceca\") pod \"node-ca-4bcxm\" (UID: \"bc086f6b-af67-49e4-97c8-f8b70f19e49a\") " pod="openshift-image-registry/node-ca-4bcxm" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.650694 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bc086f6b-af67-49e4-97c8-f8b70f19e49a-serviceca\") pod \"node-ca-4bcxm\" (UID: \"bc086f6b-af67-49e4-97c8-f8b70f19e49a\") " pod="openshift-image-registry/node-ca-4bcxm" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.667979 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.692231 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.692268 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.692277 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 
00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.692293 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.692302 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:59Z","lastTransitionTime":"2026-01-24T00:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.699480 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4bcxm" Jan 24 00:03:59 crc kubenswrapper[4676]: W0124 00:03:59.719543 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc086f6b_af67_49e4_97c8_f8b70f19e49a.slice/crio-422ee2aec88f6d9695479975fa9bf7b2c3d3eab9e3d87f14e41463e52aeea3b8 WatchSource:0}: Error finding container 422ee2aec88f6d9695479975fa9bf7b2c3d3eab9e3d87f14e41463e52aeea3b8: Status 404 returned error can't find the container with id 422ee2aec88f6d9695479975fa9bf7b2c3d3eab9e3d87f14e41463e52aeea3b8 Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.734284 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.752420 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.788513 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.795345 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.795410 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.795422 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.795444 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.795455 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:59Z","lastTransitionTime":"2026-01-24T00:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.833323 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.868972 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.901950 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.901976 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.901983 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 
00:03:59.901996 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.902004 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:03:59Z","lastTransitionTime":"2026-01-24T00:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.912777 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.946223 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bde
af3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:03:59 crc kubenswrapper[4676]: I0124 00:03:59.987903 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:03:59Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.003921 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.004088 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.004145 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.004204 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.004273 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:00Z","lastTransitionTime":"2026-01-24T00:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.027046 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc
10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.067333 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.106508 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.106574 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.106595 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.106620 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.106637 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:00Z","lastTransitionTime":"2026-01-24T00:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.110080 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.146713 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.191615 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.208817 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.208854 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.208864 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.208880 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.208891 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:00Z","lastTransitionTime":"2026-01-24T00:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.210967 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:32:35.258624193 +0000 UTC Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.225990 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.255512 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:00 crc kubenswrapper[4676]: E0124 00:04:00.255638 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.275699 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54
b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.309687 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 
00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.310940 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.310994 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.311004 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.311019 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.311031 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:00Z","lastTransitionTime":"2026-01-24T00:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.349931 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.390654 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.414050 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.414090 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.414100 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.414116 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.414127 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:00Z","lastTransitionTime":"2026-01-24T00:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.430560 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.464775 4676 generic.go:334] "Generic (PLEG): container finished" podID="79ad333b-cf18-4ba3-b9d4-2f89c7c44354" containerID="1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44" exitCode=0 Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.464810 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" event={"ID":"79ad333b-cf18-4ba3-b9d4-2f89c7c44354","Type":"ContainerDied","Data":"1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44"} Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.466862 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4bcxm" event={"ID":"bc086f6b-af67-49e4-97c8-f8b70f19e49a","Type":"ContainerStarted","Data":"f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577"} Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.466907 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4bcxm" 
event={"ID":"bc086f6b-af67-49e4-97c8-f8b70f19e49a","Type":"ContainerStarted","Data":"422ee2aec88f6d9695479975fa9bf7b2c3d3eab9e3d87f14e41463e52aeea3b8"} Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.474216 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountP
ath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.513640 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\
\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.516967 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 
00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.517002 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.517018 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.517037 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.517050 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:00Z","lastTransitionTime":"2026-01-24T00:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.546687 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.587841 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.620212 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.620247 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.620256 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.620270 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.620280 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:00Z","lastTransitionTime":"2026-01-24T00:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.636749 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.673253 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.708302 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.722518 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.722571 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.722586 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.722606 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.722623 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:00Z","lastTransitionTime":"2026-01-24T00:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.764283 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.796497 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.824933 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.824969 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.824979 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.824995 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.825007 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:00Z","lastTransitionTime":"2026-01-24T00:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.836601 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.868147 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.925803 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.926798 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.926826 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.926835 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.926848 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.926858 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:00Z","lastTransitionTime":"2026-01-24T00:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.957704 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:00 crc kubenswrapper[4676]: I0124 00:04:00.989332 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:00Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.028904 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b
902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.029813 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.029879 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.029900 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.029926 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.029944 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:01Z","lastTransitionTime":"2026-01-24T00:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.073640 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.111827 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.132342 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.132404 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.132416 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.132434 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.132445 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:01Z","lastTransitionTime":"2026-01-24T00:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.154355 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.189874 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.211538 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:32:18.513892574 +0000 UTC Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.227979 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.235001 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.235056 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.235070 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.235091 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.235103 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:01Z","lastTransitionTime":"2026-01-24T00:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.255331 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.255331 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:01 crc kubenswrapper[4676]: E0124 00:04:01.255446 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:01 crc kubenswrapper[4676]: E0124 00:04:01.255511 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.337838 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.337899 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.337919 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.337946 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.337964 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:01Z","lastTransitionTime":"2026-01-24T00:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.440676 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.440737 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.440756 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.440780 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.440796 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:01Z","lastTransitionTime":"2026-01-24T00:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.475443 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerStarted","Data":"a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624"} Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.478560 4676 generic.go:334] "Generic (PLEG): container finished" podID="79ad333b-cf18-4ba3-b9d4-2f89c7c44354" containerID="5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d" exitCode=0 Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.478591 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" event={"ID":"79ad333b-cf18-4ba3-b9d4-2f89c7c44354","Type":"ContainerDied","Data":"5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d"} Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.493483 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.512873 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.537129 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.546190 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.546256 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.546284 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.546319 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.546332 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:01Z","lastTransitionTime":"2026-01-24T00:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.553063 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.566189 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 
2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.578870 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.592755 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.604994 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.626088 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00
:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.642587 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 
00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.648165 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.648187 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.648195 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.648207 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.648216 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:01Z","lastTransitionTime":"2026-01-24T00:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.673999 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.708749 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.746108 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.750394 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.750420 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.750429 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 
00:04:01.750442 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.750456 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:01Z","lastTransitionTime":"2026-01-24T00:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.786017 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.825422 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bde
af3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:01Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.853915 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.853973 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.853994 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.854024 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.854043 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:01Z","lastTransitionTime":"2026-01-24T00:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.956492 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.956559 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.956581 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.956612 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:01 crc kubenswrapper[4676]: I0124 00:04:01.956646 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:01Z","lastTransitionTime":"2026-01-24T00:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.059444 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.059499 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.059562 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.059591 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.059600 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:02Z","lastTransitionTime":"2026-01-24T00:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.161479 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.161515 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.161523 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.161535 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.161544 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:02Z","lastTransitionTime":"2026-01-24T00:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.212201 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 14:28:09.969257689 +0000 UTC Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.255663 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:02 crc kubenswrapper[4676]: E0124 00:04:02.255821 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.264216 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.264268 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.264285 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.264307 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.264324 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:02Z","lastTransitionTime":"2026-01-24T00:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.367213 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.367271 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.367293 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.367322 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.367342 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:02Z","lastTransitionTime":"2026-01-24T00:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.470042 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.470082 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.470093 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.470112 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.470124 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:02Z","lastTransitionTime":"2026-01-24T00:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.483669 4676 generic.go:334] "Generic (PLEG): container finished" podID="79ad333b-cf18-4ba3-b9d4-2f89c7c44354" containerID="8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239" exitCode=0 Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.483713 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" event={"ID":"79ad333b-cf18-4ba3-b9d4-2f89c7c44354","Type":"ContainerDied","Data":"8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239"} Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.497670 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:02Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.515262 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:02Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.530942 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:02Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.540659 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b
902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:02Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.553546 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 
00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:02Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.566227 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:02Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.572559 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.572595 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.572604 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.572621 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.572629 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:02Z","lastTransitionTime":"2026-01-24T00:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.578797 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:02Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.589274 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:02Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.609518 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:02Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.624652 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:02Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.632757 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:02Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.650532 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00
:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:02Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.666266 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:02Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.674079 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.674115 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.674126 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.674144 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.674157 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:02Z","lastTransitionTime":"2026-01-24T00:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.679327 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:02Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.690730 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:02Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.776739 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.776795 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.776812 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.776834 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.776850 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:02Z","lastTransitionTime":"2026-01-24T00:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.880474 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.880514 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.880523 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.880538 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.880547 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:02Z","lastTransitionTime":"2026-01-24T00:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.983677 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.983774 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.983791 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.983812 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.983827 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:02Z","lastTransitionTime":"2026-01-24T00:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:02 crc kubenswrapper[4676]: I0124 00:04:02.985466 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:04:02 crc kubenswrapper[4676]: E0124 00:04:02.985786 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 00:04:10.985739072 +0000 UTC m=+35.015710133 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.085861 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.085894 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.085923 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.085939 4676 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:03 crc kubenswrapper[4676]: E0124 00:04:03.085940 4676 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:04:03 crc kubenswrapper[4676]: E0124 00:04:03.085990 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 00:04:11.085973013 +0000 UTC m=+35.115944014 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:04:03 crc kubenswrapper[4676]: E0124 00:04:03.086020 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:04:03 crc kubenswrapper[4676]: E0124 00:04:03.086031 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:04:03 crc kubenswrapper[4676]: E0124 00:04:03.086040 4676 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:03 crc kubenswrapper[4676]: E0124 00:04:03.086064 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 00:04:11.086056575 +0000 UTC m=+35.116027576 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:03 crc kubenswrapper[4676]: E0124 00:04:03.086101 4676 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:04:03 crc kubenswrapper[4676]: E0124 00:04:03.086119 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 00:04:11.086113277 +0000 UTC m=+35.116084278 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:04:03 crc kubenswrapper[4676]: E0124 00:04:03.086156 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:04:03 crc kubenswrapper[4676]: E0124 00:04:03.086163 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:04:03 crc kubenswrapper[4676]: E0124 00:04:03.086170 4676 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:03 crc kubenswrapper[4676]: E0124 00:04:03.086189 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 00:04:11.086182149 +0000 UTC m=+35.116153150 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.086955 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.086972 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.086981 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.086992 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.087000 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:03Z","lastTransitionTime":"2026-01-24T00:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.188367 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.188418 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.188431 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.188459 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.188472 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:03Z","lastTransitionTime":"2026-01-24T00:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.212585 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 03:06:44.982662844 +0000 UTC Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.254887 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.254970 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:03 crc kubenswrapper[4676]: E0124 00:04:03.255026 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:03 crc kubenswrapper[4676]: E0124 00:04:03.255086 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.290341 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.290441 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.290495 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.290520 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.290543 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:03Z","lastTransitionTime":"2026-01-24T00:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.393676 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.393725 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.393740 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.393761 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.393778 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:03Z","lastTransitionTime":"2026-01-24T00:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.492166 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" event={"ID":"79ad333b-cf18-4ba3-b9d4-2f89c7c44354","Type":"ContainerStarted","Data":"c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8"} Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.496806 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.496833 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.496841 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.496855 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.496864 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:03Z","lastTransitionTime":"2026-01-24T00:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.499786 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerStarted","Data":"b8471192d819a4b266cdc87f77c8ca851671825cfe5b82b786bae9b8f73f1886"} Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.500134 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.513049 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df
33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5a
d649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\
"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.528300 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.530556 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.547692 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d
930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.562996 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.581307 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.599853 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.599922 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.599951 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.599982 4676 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.600004 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:03Z","lastTransitionTime":"2026-01-24T00:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.600786 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.614680 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.634002 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.653363 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.664088 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b
902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.678932 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 
00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.701669 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.704212 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.704243 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.704252 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.704265 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.704275 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:03Z","lastTransitionTime":"2026-01-24T00:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.722893 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.756437 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.780184 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.806519 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.806568 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.806581 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.806599 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.806611 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:03Z","lastTransitionTime":"2026-01-24T00:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.807248 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8471192d819a4b266cdc87f77c8ca851671825cfe5b82b786bae9b8f73f1886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.827877 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.843629 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.859680 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.869497 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.881341 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.889821 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.901119 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.909131 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.909163 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.909171 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.909189 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.909200 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:03Z","lastTransitionTime":"2026-01-24T00:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.914539 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7
ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.928115 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.941110 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.951515 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.960720 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b
902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.971310 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:03 crc kubenswrapper[4676]: I0124 00:04:03.981631 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:03Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.011256 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.011317 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.011329 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.011345 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.011355 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:04Z","lastTransitionTime":"2026-01-24T00:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.114723 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.114782 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.114801 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.114827 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.114845 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:04Z","lastTransitionTime":"2026-01-24T00:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.213339 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:02:10.177478845 +0000 UTC Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.217571 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.217613 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.217625 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.217642 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.217653 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:04Z","lastTransitionTime":"2026-01-24T00:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.255192 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:04 crc kubenswrapper[4676]: E0124 00:04:04.255460 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.320926 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.320982 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.320994 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.321014 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.321032 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:04Z","lastTransitionTime":"2026-01-24T00:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.423975 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.424038 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.424050 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.424073 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.424087 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:04Z","lastTransitionTime":"2026-01-24T00:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.502814 4676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.503371 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.526839 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.526878 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.526892 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.526913 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.526928 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:04Z","lastTransitionTime":"2026-01-24T00:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.534095 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.559236 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e6
69a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:04Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.581595 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:04Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.601894 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:04Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.617807 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:04Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.629246 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.629289 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.629300 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.629322 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.629334 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:04Z","lastTransitionTime":"2026-01-24T00:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.635853 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:04Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.646433 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:04Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.658666 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:04Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.673191 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:04Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.689548 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:04Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.703557 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:04Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.716462 4676 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:04Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.728021 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:04Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.731961 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.732010 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.732020 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 
00:04:04.732042 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.732055 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:04Z","lastTransitionTime":"2026-01-24T00:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.742359 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:04Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.754403 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:04Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.774849 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8471192d819a4b266cdc87f77c8ca851671825cfe5b82b786bae9b8f73f1886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:04Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.834623 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.834694 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.834703 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.834725 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.834740 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:04Z","lastTransitionTime":"2026-01-24T00:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.937621 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.937942 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.938042 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.938126 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:04 crc kubenswrapper[4676]: I0124 00:04:04.938202 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:04Z","lastTransitionTime":"2026-01-24T00:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.041692 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.041766 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.041784 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.041814 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.041828 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:05Z","lastTransitionTime":"2026-01-24T00:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.143706 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.143741 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.143749 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.143762 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.143771 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:05Z","lastTransitionTime":"2026-01-24T00:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.213666 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:26:29.049475747 +0000 UTC Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.246481 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.246518 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.246530 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.246546 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.246559 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:05Z","lastTransitionTime":"2026-01-24T00:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.254743 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:05 crc kubenswrapper[4676]: E0124 00:04:05.254843 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.254926 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:05 crc kubenswrapper[4676]: E0124 00:04:05.255138 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.357049 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.357095 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.357105 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.357120 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.357131 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:05Z","lastTransitionTime":"2026-01-24T00:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.459105 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.459146 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.459156 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.459171 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.459182 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:05Z","lastTransitionTime":"2026-01-24T00:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.508507 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/0.log" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.511943 4676 generic.go:334] "Generic (PLEG): container finished" podID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerID="b8471192d819a4b266cdc87f77c8ca851671825cfe5b82b786bae9b8f73f1886" exitCode=1 Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.511991 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerDied","Data":"b8471192d819a4b266cdc87f77c8ca851671825cfe5b82b786bae9b8f73f1886"} Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.513126 4676 scope.go:117] "RemoveContainer" containerID="b8471192d819a4b266cdc87f77c8ca851671825cfe5b82b786bae9b8f73f1886" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.527897 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:05Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.539530 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:05Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.556772 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metric
s-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-contro
ller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8471192d819a4b266cdc87f77c8ca851671825cfe5b82b786bae9b8f73f1886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8471192d819a4b266cdc87f77c8ca851671825cfe5b82b786bae9b8f73f1886\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:05Z\\\",\\\"message\\\":\\\"\\\\nI0124 00:04:05.114978 5895 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0124 00:04:05.115011 5895 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 00:04:05.115016 5895 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 00:04:05.115045 5895 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 00:04:05.115144 5895 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 00:04:05.115169 5895 handler.go:208] Removed *v1.Node event handler 2\\\\nI0124 00:04:05.115193 5895 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 00:04:05.115212 5895 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0124 00:04:05.115222 5895 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 00:04:05.115431 5895 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0124 00:04:05.115445 5895 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0124 00:04:05.115449 5895 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0124 00:04:05.115465 5895 factory.go:656] Stopping watch factory\\\\nI0124 00:04:05.115487 5895 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:05.115507 5895 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0124 00:04:05.115523 5895 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 
00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:05Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.561688 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.561725 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.561736 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.561753 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.561766 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:05Z","lastTransitionTime":"2026-01-24T00:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.571401 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:05Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.587155 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:05Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.598419 4676 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:05Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.613364 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:05Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.627973 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:05Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.647822 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:05Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.663957 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.663996 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.664006 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.664022 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.664031 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:05Z","lastTransitionTime":"2026-01-24T00:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.665369 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:05Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.680576 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:05Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.692711 4676 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:05Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.704402 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:05Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.722008 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:05Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.738283 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b
902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:05Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.765597 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.765631 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.765641 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.765655 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.765668 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:05Z","lastTransitionTime":"2026-01-24T00:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.868136 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.868199 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.868216 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.868240 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.868256 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:05Z","lastTransitionTime":"2026-01-24T00:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.970102 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.970136 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.970145 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.970160 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:05 crc kubenswrapper[4676]: I0124 00:04:05.970169 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:05Z","lastTransitionTime":"2026-01-24T00:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.072217 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.072296 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.072314 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.072340 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.072358 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:06Z","lastTransitionTime":"2026-01-24T00:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.174640 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.174689 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.174706 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.174732 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.174750 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:06Z","lastTransitionTime":"2026-01-24T00:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.214367 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:31:02.700793703 +0000 UTC Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.255272 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:06 crc kubenswrapper[4676]: E0124 00:04:06.255427 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.270739 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.281416 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.281455 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.281467 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.281483 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.281493 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:06Z","lastTransitionTime":"2026-01-24T00:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.283335 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.321785 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8471192d819a4b266cdc87f77c8ca851671825cfe5b82b786bae9b8f73f1886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8471192d819a4b266cdc87f77c8ca851671825cfe5b82b786bae9b8f73f1886\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:05Z\\\",\\\"message\\\":\\\"\\\\nI0124 00:04:05.114978 5895 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0124 00:04:05.115011 5895 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 00:04:05.115016 5895 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 00:04:05.115045 5895 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 00:04:05.115144 5895 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 00:04:05.115169 5895 handler.go:208] Removed *v1.Node event handler 2\\\\nI0124 00:04:05.115193 5895 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 00:04:05.115212 5895 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0124 00:04:05.115222 5895 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 00:04:05.115431 5895 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0124 00:04:05.115445 5895 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0124 00:04:05.115449 5895 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0124 00:04:05.115465 5895 factory.go:656] Stopping watch factory\\\\nI0124 00:04:05.115487 5895 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:05.115507 5895 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0124 00:04:05.115523 5895 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 
00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.343892 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.360713 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.376043 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.384534 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.384590 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.384603 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.384628 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.384660 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:06Z","lastTransitionTime":"2026-01-24T00:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.392402 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.413631 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.425062 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.443120 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.459294 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.476742 4676 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.486926 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.486974 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.486985 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.487001 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.487011 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:06Z","lastTransitionTime":"2026-01-24T00:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.491481 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.504607 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.517932 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/0.log" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.517919 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{
\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.521164 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" 
event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerStarted","Data":"901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a"} Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.521284 4676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.548270 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes
/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c
6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a
5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.561813 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.577988 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.589457 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.589681 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.589761 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.589851 4676 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.589933 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:06Z","lastTransitionTime":"2026-01-24T00:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.595930 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.615769 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.629336 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.642958 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a
03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.679830 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 
00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.692843 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.692884 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.692895 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.692913 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.692927 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:06Z","lastTransitionTime":"2026-01-24T00:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.700392 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.719913 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.738540 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.781456 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.796116 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:06 crc 
kubenswrapper[4676]: I0124 00:04:06.796166 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.796179 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.796197 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.796211 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:06Z","lastTransitionTime":"2026-01-24T00:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.803676 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.821468 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.856898 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8471192d819a4b266cdc87f77c8ca851671825cfe5b82b786bae9b8f73f1886\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:05Z\\\",\\\"message\\\":\\\"\\\\nI0124 00:04:05.114978 5895 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0124 00:04:05.115011 5895 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 00:04:05.115016 5895 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 00:04:05.115045 5895 handler.go:208] Removed *v1.Pod event 
handler 6\\\\nI0124 00:04:05.115144 5895 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 00:04:05.115169 5895 handler.go:208] Removed *v1.Node event handler 2\\\\nI0124 00:04:05.115193 5895 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 00:04:05.115212 5895 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0124 00:04:05.115222 5895 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 00:04:05.115431 5895 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0124 00:04:05.115445 5895 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0124 00:04:05.115449 5895 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0124 00:04:05.115465 5895 factory.go:656] Stopping watch factory\\\\nI0124 00:04:05.115487 5895 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:05.115507 5895 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0124 00:04:05.115523 5895 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 
00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.898368 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.898443 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.898459 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.898481 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:06 crc kubenswrapper[4676]: I0124 00:04:06.898499 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:06Z","lastTransitionTime":"2026-01-24T00:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.000905 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.000968 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.000985 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.001009 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.001026 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:07Z","lastTransitionTime":"2026-01-24T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.103313 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.103421 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.103451 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.103481 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.103498 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:07Z","lastTransitionTime":"2026-01-24T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.205877 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.205928 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.205945 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.205968 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.205984 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:07Z","lastTransitionTime":"2026-01-24T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.215018 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:24:29.402719641 +0000 UTC Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.254873 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.254873 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:07 crc kubenswrapper[4676]: E0124 00:04:07.255150 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:07 crc kubenswrapper[4676]: E0124 00:04:07.255027 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.309695 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.309762 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.309780 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.309815 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.309833 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:07Z","lastTransitionTime":"2026-01-24T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.412792 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.412862 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.412886 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.412910 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.412927 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:07Z","lastTransitionTime":"2026-01-24T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.515814 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.515875 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.515892 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.515918 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.515935 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:07Z","lastTransitionTime":"2026-01-24T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.526588 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/1.log" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.527289 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/0.log" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.530259 4676 generic.go:334] "Generic (PLEG): container finished" podID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerID="901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a" exitCode=1 Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.530312 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerDied","Data":"901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a"} Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.530356 4676 scope.go:117] "RemoveContainer" containerID="b8471192d819a4b266cdc87f77c8ca851671825cfe5b82b786bae9b8f73f1886" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.531421 4676 scope.go:117] "RemoveContainer" containerID="901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a" Jan 24 00:04:07 crc kubenswrapper[4676]: E0124 00:04:07.531660 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.550862 4676 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.567999 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.598483 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8471192d819a4b266cdc87f77c8ca851671825cfe5b82b786bae9b8f73f1886\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:05Z\\\",\\\"message\\\":\\\"\\\\nI0124 00:04:05.114978 5895 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0124 00:04:05.115011 5895 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 00:04:05.115016 5895 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 00:04:05.115045 5895 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 00:04:05.115144 5895 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 00:04:05.115169 5895 handler.go:208] Removed *v1.Node event handler 2\\\\nI0124 00:04:05.115193 5895 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 00:04:05.115212 5895 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0124 00:04:05.115222 5895 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 00:04:05.115431 5895 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0124 00:04:05.115445 5895 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0124 00:04:05.115449 5895 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0124 00:04:05.115465 5895 factory.go:656] Stopping watch factory\\\\nI0124 00:04:05.115487 5895 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:05.115507 5895 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0124 00:04:05.115523 5895 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 00\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:06Z\\\",\\\"message\\\":\\\"e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 00:04:06.463876 6013 ovnkube_controller.go:900] Cache entry expected pod with UID 
\\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0124 00:04:06.464703 6013 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nI0124 00:04:06.460422 6013 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0124 00:04:06.458980 6013 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0124 00:04:06.464808 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315
551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.619575 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.619867 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.620003 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.620125 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.620236 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:07Z","lastTransitionTime":"2026-01-24T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.621963 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.644084 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.655320 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.678295 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.689227 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.689290 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.689303 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.689322 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.689349 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:07Z","lastTransitionTime":"2026-01-24T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.693236 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: E0124 00:04:07.706408 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.709234 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.711216 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.711405 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.711441 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.711468 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.711486 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:07Z","lastTransitionTime":"2026-01-24T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:07 crc kubenswrapper[4676]: E0124 00:04:07.727528 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.728801 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.731566 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.731612 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.731628 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.731652 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.731669 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:07Z","lastTransitionTime":"2026-01-24T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.742504 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: E0124 00:04:07.745266 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"kubelet has 
no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c6
9fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\
\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737
e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909
bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.751516 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.751647 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.751734 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.751817 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.751913 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:07Z","lastTransitionTime":"2026-01-24T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.753101 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:
03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.767715 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: E0124 00:04:07.770255 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.773232 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.773259 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.773267 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.773281 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.773302 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:07Z","lastTransitionTime":"2026-01-24T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.780436 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: E0124 00:04:07.785412 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: E0124 00:04:07.785518 4676 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.786899 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.786958 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.786968 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.786981 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.786990 4676 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:07Z","lastTransitionTime":"2026-01-24T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.795542 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},
{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:07Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.889759 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.890018 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.890128 4676 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.890305 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.890424 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:07Z","lastTransitionTime":"2026-01-24T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.993908 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.993974 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.993997 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.994028 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:07 crc kubenswrapper[4676]: I0124 00:04:07.994053 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:07Z","lastTransitionTime":"2026-01-24T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.097661 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.097714 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.097728 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.097750 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.097765 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:08Z","lastTransitionTime":"2026-01-24T00:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.199897 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.199938 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.199953 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.199970 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.199981 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:08Z","lastTransitionTime":"2026-01-24T00:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.215555 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 03:57:35.449626457 +0000 UTC Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.255174 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:08 crc kubenswrapper[4676]: E0124 00:04:08.255444 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.302923 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.302987 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.303004 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.303027 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.303059 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:08Z","lastTransitionTime":"2026-01-24T00:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.406068 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.406140 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.406166 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.406195 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.406216 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:08Z","lastTransitionTime":"2026-01-24T00:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.509015 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.509055 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.509066 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.509082 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.509095 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:08Z","lastTransitionTime":"2026-01-24T00:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.535125 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/1.log" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.538294 4676 scope.go:117] "RemoveContainer" containerID="901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a" Jan 24 00:04:08 crc kubenswrapper[4676]: E0124 00:04:08.538446 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.553078 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:08Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.565685 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:08Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.576677 4676 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:08Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.590399 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:08Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.610935 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.610965 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.610973 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 
00:04:08.611002 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.611012 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:08Z","lastTransitionTime":"2026-01-24T00:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.658514 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:08Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.669974 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bde
af3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:08Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.683921 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:08Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.695122 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:08Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.712877 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.712931 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.713046 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.713098 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.713117 4676 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:08Z","lastTransitionTime":"2026-01-24T00:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.716978 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:06Z\\\",\\\"message\\\":\\\"e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 00:04:06.463876 6013 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0124 00:04:06.464703 6013 
ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nI0124 00:04:06.460422 6013 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0124 00:04:06.458980 6013 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0124 00:04:06.464808 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:08Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.737122 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:08Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.753799 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:08Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.765647 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:08Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.776996 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:08Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.788546 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:08Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.797049 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:08Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.815700 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.815724 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.815732 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.815745 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.815769 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:08Z","lastTransitionTime":"2026-01-24T00:04:08Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.918331 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.918410 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.918427 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.918448 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:08 crc kubenswrapper[4676]: I0124 00:04:08.918463 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:08Z","lastTransitionTime":"2026-01-24T00:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.021425 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.021473 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.021488 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.021507 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.021523 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:09Z","lastTransitionTime":"2026-01-24T00:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.124591 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.124664 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.124682 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.124708 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.124727 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:09Z","lastTransitionTime":"2026-01-24T00:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.216218 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:12:35.154406132 +0000 UTC Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.227652 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.227717 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.227732 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.227751 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.227766 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:09Z","lastTransitionTime":"2026-01-24T00:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.254356 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts"] Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.255199 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.255248 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" Jan 24 00:04:09 crc kubenswrapper[4676]: E0124 00:04:09.255444 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.255487 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:09 crc kubenswrapper[4676]: E0124 00:04:09.255749 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.259102 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.262298 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.278405 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.297299 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.315102 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.330814 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.330845 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.330855 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.330868 4676 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.330878 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:09Z","lastTransitionTime":"2026-01-24T00:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.334798 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.345834 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj4f8\" (UniqueName: \"kubernetes.io/projected/40151406-46c7-4668-8b2b-db0585847be9-kube-api-access-sj4f8\") pod \"ovnkube-control-plane-749d76644c-7m8ts\" (UID: \"40151406-46c7-4668-8b2b-db0585847be9\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.345909 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/40151406-46c7-4668-8b2b-db0585847be9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7m8ts\" (UID: \"40151406-46c7-4668-8b2b-db0585847be9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.345975 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/40151406-46c7-4668-8b2b-db0585847be9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7m8ts\" (UID: \"40151406-46c7-4668-8b2b-db0585847be9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.346003 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/40151406-46c7-4668-8b2b-db0585847be9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7m8ts\" (UID: \"40151406-46c7-4668-8b2b-db0585847be9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.352726 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.365415 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.382207 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.399531 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b
902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.421715 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 
00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.433743 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.434007 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.434081 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.434203 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.434274 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:09Z","lastTransitionTime":"2026-01-24T00:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.439760 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.446661 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/40151406-46c7-4668-8b2b-db0585847be9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7m8ts\" (UID: \"40151406-46c7-4668-8b2b-db0585847be9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 
00:04:09.446708 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/40151406-46c7-4668-8b2b-db0585847be9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7m8ts\" (UID: \"40151406-46c7-4668-8b2b-db0585847be9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.446735 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj4f8\" (UniqueName: \"kubernetes.io/projected/40151406-46c7-4668-8b2b-db0585847be9-kube-api-access-sj4f8\") pod \"ovnkube-control-plane-749d76644c-7m8ts\" (UID: \"40151406-46c7-4668-8b2b-db0585847be9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.446780 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/40151406-46c7-4668-8b2b-db0585847be9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7m8ts\" (UID: \"40151406-46c7-4668-8b2b-db0585847be9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.447677 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/40151406-46c7-4668-8b2b-db0585847be9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7m8ts\" (UID: \"40151406-46c7-4668-8b2b-db0585847be9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.447757 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/40151406-46c7-4668-8b2b-db0585847be9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7m8ts\" (UID: 
\"40151406-46c7-4668-8b2b-db0585847be9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.457259 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/40151406-46c7-4668-8b2b-db0585847be9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7m8ts\" (UID: \"40151406-46c7-4668-8b2b-db0585847be9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.459200 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.463796 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj4f8\" (UniqueName: \"kubernetes.io/projected/40151406-46c7-4668-8b2b-db0585847be9-kube-api-access-sj4f8\") pod \"ovnkube-control-plane-749d76644c-7m8ts\" (UID: \"40151406-46c7-4668-8b2b-db0585847be9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.473902 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.487004 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.499542 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.527540 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:06Z\\\",\\\"message\\\":\\\"e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 00:04:06.463876 6013 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0124 00:04:06.464703 6013 
ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nI0124 00:04:06.460422 6013 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0124 00:04:06.458980 6013 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0124 00:04:06.464808 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.536467 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.536533 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.536555 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.536586 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.536607 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:09Z","lastTransitionTime":"2026-01-24T00:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.559313 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.569592 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" Jan 24 00:04:09 crc kubenswrapper[4676]: W0124 00:04:09.587802 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40151406_46c7_4668_8b2b_db0585847be9.slice/crio-0ac6ade2d0240e340fe022a220fd0929fae2090fb4532fd0c2cd300e5c65aa22 WatchSource:0}: Error finding container 0ac6ade2d0240e340fe022a220fd0929fae2090fb4532fd0c2cd300e5c65aa22: Status 404 returned error can't find the container with id 0ac6ade2d0240e340fe022a220fd0929fae2090fb4532fd0c2cd300e5c65aa22 Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.639010 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.639046 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.639058 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.639074 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.639086 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:09Z","lastTransitionTime":"2026-01-24T00:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.741834 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.741874 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.741886 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.741902 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.741915 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:09Z","lastTransitionTime":"2026-01-24T00:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.844112 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.844370 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.844393 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.844409 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.844420 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:09Z","lastTransitionTime":"2026-01-24T00:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.947148 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.947193 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.947209 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.947232 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:09 crc kubenswrapper[4676]: I0124 00:04:09.947249 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:09Z","lastTransitionTime":"2026-01-24T00:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.049940 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.050167 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.050260 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.050350 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.050448 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:10Z","lastTransitionTime":"2026-01-24T00:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.153239 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.153279 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.153288 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.153304 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.153314 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:10Z","lastTransitionTime":"2026-01-24T00:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.216996 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 13:25:07.113228846 +0000 UTC Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.254606 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:10 crc kubenswrapper[4676]: E0124 00:04:10.254696 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.255048 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.255136 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.255200 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.255301 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.255371 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:10Z","lastTransitionTime":"2026-01-24T00:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.357635 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.357690 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.357703 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.357723 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.357739 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:10Z","lastTransitionTime":"2026-01-24T00:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.402368 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-r4q22"] Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.403090 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:10 crc kubenswrapper[4676]: E0124 00:04:10.403186 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.420589 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.438981 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c2
0c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.460686 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.461007 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsw85\" (UniqueName: \"kubernetes.io/projected/18335446-e572-4741-ad9e-e7aadee7550b-kube-api-access-tsw85\") pod \"network-metrics-daemon-r4q22\" (UID: \"18335446-e572-4741-ad9e-e7aadee7550b\") " pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.461280 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs\") pod \"network-metrics-daemon-r4q22\" (UID: \"18335446-e572-4741-ad9e-e7aadee7550b\") " pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.462821 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.462875 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.462887 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.462907 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.462919 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:10Z","lastTransitionTime":"2026-01-24T00:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.484171 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:06Z\\\",\\\"message\\\":\\\"e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 00:04:06.463876 6013 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0124 00:04:06.464703 6013 
ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nI0124 00:04:06.460422 6013 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0124 00:04:06.458980 6013 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0124 00:04:06.464808 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.502438 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.517214 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.530401 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.547124 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.547404 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" event={"ID":"40151406-46c7-4668-8b2b-db0585847be9","Type":"ContainerStarted","Data":"7097d31bd127d1e68680dfec923eecc06e9a43f0cf00153752e237b0c013d39d"} Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.547451 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" event={"ID":"40151406-46c7-4668-8b2b-db0585847be9","Type":"ContainerStarted","Data":"4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023"} Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.547465 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" event={"ID":"40151406-46c7-4668-8b2b-db0585847be9","Type":"ContainerStarted","Data":"0ac6ade2d0240e340fe022a220fd0929fae2090fb4532fd0c2cd300e5c65aa22"} Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.561346 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.562279 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsw85\" (UniqueName: \"kubernetes.io/projected/18335446-e572-4741-ad9e-e7aadee7550b-kube-api-access-tsw85\") pod \"network-metrics-daemon-r4q22\" (UID: \"18335446-e572-4741-ad9e-e7aadee7550b\") " pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.562572 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs\") pod \"network-metrics-daemon-r4q22\" (UID: \"18335446-e572-4741-ad9e-e7aadee7550b\") " pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:10 crc kubenswrapper[4676]: E0124 00:04:10.562782 4676 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:04:10 crc kubenswrapper[4676]: E0124 00:04:10.562924 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs podName:18335446-e572-4741-ad9e-e7aadee7550b nodeName:}" failed. No retries permitted until 2026-01-24 00:04:11.062891328 +0000 UTC m=+35.092862369 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs") pod "network-metrics-daemon-r4q22" (UID: "18335446-e572-4741-ad9e-e7aadee7550b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.565540 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.565603 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.565626 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.565657 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.565679 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:10Z","lastTransitionTime":"2026-01-24T00:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.589596 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsw85\" (UniqueName: \"kubernetes.io/projected/18335446-e572-4741-ad9e-e7aadee7550b-kube-api-access-tsw85\") pod \"network-metrics-daemon-r4q22\" (UID: \"18335446-e572-4741-ad9e-e7aadee7550b\") " pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.595196 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"l
og-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d
0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.609708 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.625464 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.643747 4676 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.660649 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.668779 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.668811 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.668825 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 
00:04:10.668844 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.668857 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:10Z","lastTransitionTime":"2026-01-24T00:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.676290 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.691566 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bde
af3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.705204 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc 
kubenswrapper[4676]: I0124 00:04:10.723212 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a43f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.754841 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:06Z\\\",\\\"message\\\":\\\"e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 00:04:06.463876 6013 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0124 00:04:06.464703 6013 
ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nI0124 00:04:06.460422 6013 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0124 00:04:06.458980 6013 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0124 00:04:06.464808 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.770916 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.770971 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.770980 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.770998 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.771007 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:10Z","lastTransitionTime":"2026-01-24T00:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.773270 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.786603 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 
2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.799430 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.815047 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.822090 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.840978 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.864473 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.873254 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.873426 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.873519 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.873663 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.873803 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:10Z","lastTransitionTime":"2026-01-24T00:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.881420 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.896168 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.911726 4676 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.931542 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.950886 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.969006 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b
902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.975757 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.975869 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.975967 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.976060 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.976135 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:10Z","lastTransitionTime":"2026-01-24T00:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:10 crc kubenswrapper[4676]: I0124 00:04:10.983686 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc 
kubenswrapper[4676]: I0124 00:04:10.999897 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:10Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.018196 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.034874 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a4
3f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.063764 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:06Z\\\",\\\"message\\\":\\\"e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 00:04:06.463876 6013 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0124 00:04:06.464703 6013 
ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nI0124 00:04:06.460422 6013 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0124 00:04:06.458980 6013 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0124 00:04:06.464808 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.066845 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.067119 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:04:27.067078773 +0000 UTC m=+51.097049814 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.067208 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs\") pod \"network-metrics-daemon-r4q22\" (UID: \"18335446-e572-4741-ad9e-e7aadee7550b\") " pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.067428 4676 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.067522 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs podName:18335446-e572-4741-ad9e-e7aadee7550b nodeName:}" failed. No retries permitted until 2026-01-24 00:04:12.067497605 +0000 UTC m=+36.097468646 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs") pod "network-metrics-daemon-r4q22" (UID: "18335446-e572-4741-ad9e-e7aadee7550b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.078668 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.078731 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.078755 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.078786 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.078811 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:11Z","lastTransitionTime":"2026-01-24T00:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.086117 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.106075 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.126644 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.155049 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.168209 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.168274 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.168331 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.168365 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.168426 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.168457 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.168474 4676 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.168502 4676 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.168538 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 00:04:27.16851717 +0000 UTC m=+51.198488191 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.168605 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.168619 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.168630 4676 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.168662 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 00:04:27.168651054 +0000 UTC m=+51.198622065 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.168698 4676 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.168721 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 00:04:27.168713686 +0000 UTC m=+51.198684697 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.168739 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 00:04:27.168733016 +0000 UTC m=+51.198704027 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.169112 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.181177 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.181215 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.181228 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.181245 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.181257 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:11Z","lastTransitionTime":"2026-01-24T00:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.200701 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.217884 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c4
5dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.218178 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:29:40.747970379 +0000 UTC Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.238014 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.255509 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.255552 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.255523 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.255679 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:11 crc kubenswrapper[4676]: E0124 00:04:11.255730 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.270985 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.283513 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.283559 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.283575 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.283596 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.283614 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:11Z","lastTransitionTime":"2026-01-24T00:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.288509 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.301012 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"}
,{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.315937 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc 
kubenswrapper[4676]: I0124 00:04:11.331468 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.348534 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:11Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.385813 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.385854 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.385865 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.385879 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.385891 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:11Z","lastTransitionTime":"2026-01-24T00:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.488255 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.488299 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.488307 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.488321 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.488331 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:11Z","lastTransitionTime":"2026-01-24T00:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.591033 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.591085 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.591101 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.591120 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.591131 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:11Z","lastTransitionTime":"2026-01-24T00:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.694211 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.694265 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.694277 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.694296 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.694310 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:11Z","lastTransitionTime":"2026-01-24T00:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.796712 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.796750 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.796759 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.796772 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.796782 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:11Z","lastTransitionTime":"2026-01-24T00:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.899867 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.899941 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.899979 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.900011 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:11 crc kubenswrapper[4676]: I0124 00:04:11.900034 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:11Z","lastTransitionTime":"2026-01-24T00:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.003250 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.003301 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.003318 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.003343 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.003359 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:12Z","lastTransitionTime":"2026-01-24T00:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.078047 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs\") pod \"network-metrics-daemon-r4q22\" (UID: \"18335446-e572-4741-ad9e-e7aadee7550b\") " pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:12 crc kubenswrapper[4676]: E0124 00:04:12.078186 4676 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:04:12 crc kubenswrapper[4676]: E0124 00:04:12.078240 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs podName:18335446-e572-4741-ad9e-e7aadee7550b nodeName:}" failed. No retries permitted until 2026-01-24 00:04:14.078223665 +0000 UTC m=+38.108194676 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs") pod "network-metrics-daemon-r4q22" (UID: "18335446-e572-4741-ad9e-e7aadee7550b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.106551 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.106616 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.106629 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.106653 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.106670 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:12Z","lastTransitionTime":"2026-01-24T00:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.210294 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.210363 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.210427 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.210452 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.210466 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:12Z","lastTransitionTime":"2026-01-24T00:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.218909 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:44:53.662524312 +0000 UTC Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.255637 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.255819 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:12 crc kubenswrapper[4676]: E0124 00:04:12.255962 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:12 crc kubenswrapper[4676]: E0124 00:04:12.256206 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.312967 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.313029 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.313044 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.313070 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.313085 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:12Z","lastTransitionTime":"2026-01-24T00:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.416674 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.416745 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.416757 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.416777 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.416795 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:12Z","lastTransitionTime":"2026-01-24T00:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.525879 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.525944 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.525961 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.525987 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.526008 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:12Z","lastTransitionTime":"2026-01-24T00:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.629234 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.629297 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.629317 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.629361 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.629420 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:12Z","lastTransitionTime":"2026-01-24T00:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.732104 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.732157 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.732173 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.732199 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.732215 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:12Z","lastTransitionTime":"2026-01-24T00:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.836002 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.836080 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.836103 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.836133 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.836156 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:12Z","lastTransitionTime":"2026-01-24T00:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.938677 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.938748 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.938772 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.938796 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:12 crc kubenswrapper[4676]: I0124 00:04:12.938814 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:12Z","lastTransitionTime":"2026-01-24T00:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.041372 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.041500 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.041524 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.041557 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.041580 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:13Z","lastTransitionTime":"2026-01-24T00:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.144776 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.144891 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.144915 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.144945 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.144967 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:13Z","lastTransitionTime":"2026-01-24T00:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.219987 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:53:17.713937971 +0000 UTC Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.248222 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.248272 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.248294 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.248322 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.248348 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:13Z","lastTransitionTime":"2026-01-24T00:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.254892 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.254895 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:13 crc kubenswrapper[4676]: E0124 00:04:13.255035 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:13 crc kubenswrapper[4676]: E0124 00:04:13.255141 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.351133 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.351187 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.351223 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.351243 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.351258 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:13Z","lastTransitionTime":"2026-01-24T00:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.454044 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.454109 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.454123 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.454140 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.454153 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:13Z","lastTransitionTime":"2026-01-24T00:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.557894 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.557974 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.558001 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.558031 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.558053 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:13Z","lastTransitionTime":"2026-01-24T00:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.660446 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.660507 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.660518 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.660537 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.660549 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:13Z","lastTransitionTime":"2026-01-24T00:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.763168 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.763230 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.763245 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.763262 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.763274 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:13Z","lastTransitionTime":"2026-01-24T00:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.865933 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.865975 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.865997 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.866015 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.866027 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:13Z","lastTransitionTime":"2026-01-24T00:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.968768 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.968826 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.968843 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.968869 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:13 crc kubenswrapper[4676]: I0124 00:04:13.968886 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:13Z","lastTransitionTime":"2026-01-24T00:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.071709 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.071766 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.071788 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.071817 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.071842 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:14Z","lastTransitionTime":"2026-01-24T00:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.100759 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs\") pod \"network-metrics-daemon-r4q22\" (UID: \"18335446-e572-4741-ad9e-e7aadee7550b\") " pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:14 crc kubenswrapper[4676]: E0124 00:04:14.100940 4676 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:04:14 crc kubenswrapper[4676]: E0124 00:04:14.100998 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs podName:18335446-e572-4741-ad9e-e7aadee7550b nodeName:}" failed. No retries permitted until 2026-01-24 00:04:18.100983803 +0000 UTC m=+42.130954804 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs") pod "network-metrics-daemon-r4q22" (UID: "18335446-e572-4741-ad9e-e7aadee7550b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.174846 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.174906 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.174923 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.174947 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.174965 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:14Z","lastTransitionTime":"2026-01-24T00:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.220668 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 16:36:40.069944354 +0000 UTC Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.254991 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.255059 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:14 crc kubenswrapper[4676]: E0124 00:04:14.255193 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:14 crc kubenswrapper[4676]: E0124 00:04:14.255336 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.277198 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.277255 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.277273 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.277296 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.277339 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:14Z","lastTransitionTime":"2026-01-24T00:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.379890 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.379966 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.379984 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.380011 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.380030 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:14Z","lastTransitionTime":"2026-01-24T00:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.482434 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.482469 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.482476 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.482488 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.482498 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:14Z","lastTransitionTime":"2026-01-24T00:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.585143 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.585177 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.585187 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.585202 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.585212 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:14Z","lastTransitionTime":"2026-01-24T00:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.688495 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.688577 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.688611 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.688643 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.688662 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:14Z","lastTransitionTime":"2026-01-24T00:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.791219 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.791305 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.791324 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.791370 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.791408 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:14Z","lastTransitionTime":"2026-01-24T00:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.894418 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.894479 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.894497 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.894522 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.894539 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:14Z","lastTransitionTime":"2026-01-24T00:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.997579 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.997642 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.997660 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.997689 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:14 crc kubenswrapper[4676]: I0124 00:04:14.997714 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:14Z","lastTransitionTime":"2026-01-24T00:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.101868 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.101943 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.101962 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.101988 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.102007 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:15Z","lastTransitionTime":"2026-01-24T00:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.204034 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.204077 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.204087 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.204103 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.204114 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:15Z","lastTransitionTime":"2026-01-24T00:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.221493 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:24:37.995985575 +0000 UTC Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.254840 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.254858 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:15 crc kubenswrapper[4676]: E0124 00:04:15.255023 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:15 crc kubenswrapper[4676]: E0124 00:04:15.255124 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.307353 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.307481 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.307507 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.307539 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.307560 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:15Z","lastTransitionTime":"2026-01-24T00:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.411821 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.411877 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.411894 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.411919 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.411938 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:15Z","lastTransitionTime":"2026-01-24T00:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.515579 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.515632 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.515651 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.515677 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.515696 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:15Z","lastTransitionTime":"2026-01-24T00:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.619142 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.619221 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.619319 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.619411 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.619440 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:15Z","lastTransitionTime":"2026-01-24T00:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.722649 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.722685 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.722697 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.722713 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.722724 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:15Z","lastTransitionTime":"2026-01-24T00:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.825771 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.825829 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.825846 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.825870 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.825891 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:15Z","lastTransitionTime":"2026-01-24T00:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.928677 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.928751 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.928767 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.928788 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:15 crc kubenswrapper[4676]: I0124 00:04:15.928831 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:15Z","lastTransitionTime":"2026-01-24T00:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.031605 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.031655 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.031701 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.031724 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.031740 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:16Z","lastTransitionTime":"2026-01-24T00:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.135498 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.135553 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.135575 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.135601 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.135618 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:16Z","lastTransitionTime":"2026-01-24T00:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.222401 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 13:10:50.34454338 +0000 UTC Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.238927 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.238971 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.238983 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.239005 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.239017 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:16Z","lastTransitionTime":"2026-01-24T00:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.255507 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.255523 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:16 crc kubenswrapper[4676]: E0124 00:04:16.255647 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:16 crc kubenswrapper[4676]: E0124 00:04:16.255909 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.270997 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.287659 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b
902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.308983 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c4
5dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.333979 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.342476 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.342539 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.342559 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.342584 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.342602 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:16Z","lastTransitionTime":"2026-01-24T00:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.352339 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.369321 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.385944 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc 
kubenswrapper[4676]: I0124 00:04:16.401581 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.417818 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.444059 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:06Z\\\",\\\"message\\\":\\\"e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 00:04:06.463876 6013 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0124 00:04:06.464703 6013 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nI0124 00:04:06.460422 6013 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0124 00:04:06.458980 6013 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0124 00:04:06.464808 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.445848 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.445923 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.445941 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.445966 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.445985 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:16Z","lastTransitionTime":"2026-01-24T00:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.461486 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a43f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.494261 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db
7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.510000 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.523527 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.539288 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.548167 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.548194 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.548202 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.548216 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.548225 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:16Z","lastTransitionTime":"2026-01-24T00:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.558190 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.567907 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:16Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.650718 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.650961 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.650971 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.650986 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.650995 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:16Z","lastTransitionTime":"2026-01-24T00:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.753893 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.753988 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.754010 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.754066 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.754084 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:16Z","lastTransitionTime":"2026-01-24T00:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.856771 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.856845 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.856862 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.856887 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.856908 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:16Z","lastTransitionTime":"2026-01-24T00:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.959804 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.959858 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.959873 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.959897 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:16 crc kubenswrapper[4676]: I0124 00:04:16.959919 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:16Z","lastTransitionTime":"2026-01-24T00:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.062915 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.062989 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.063011 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.063039 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.063060 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:17Z","lastTransitionTime":"2026-01-24T00:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.165734 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.165806 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.165829 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.165858 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.165879 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:17Z","lastTransitionTime":"2026-01-24T00:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.222885 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:46:59.064603595 +0000 UTC Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.255504 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:17 crc kubenswrapper[4676]: E0124 00:04:17.256098 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.255548 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:17 crc kubenswrapper[4676]: E0124 00:04:17.256290 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.270042 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.270121 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.270143 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.270174 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.270193 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:17Z","lastTransitionTime":"2026-01-24T00:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.373365 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.373472 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.373493 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.373518 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.373539 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:17Z","lastTransitionTime":"2026-01-24T00:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.476316 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.476411 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.476425 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.476449 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.476462 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:17Z","lastTransitionTime":"2026-01-24T00:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.579174 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.579235 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.579255 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.579320 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.579341 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:17Z","lastTransitionTime":"2026-01-24T00:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.683021 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.683093 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.683115 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.683145 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.683162 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:17Z","lastTransitionTime":"2026-01-24T00:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.786982 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.787060 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.787079 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.787105 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.787125 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:17Z","lastTransitionTime":"2026-01-24T00:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.890252 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.890313 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.890331 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.890354 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.890419 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:17Z","lastTransitionTime":"2026-01-24T00:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.993289 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.993471 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.993499 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.993536 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:17 crc kubenswrapper[4676]: I0124 00:04:17.993562 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:17Z","lastTransitionTime":"2026-01-24T00:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.068108 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.068156 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.068171 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.068188 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.068200 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:18Z","lastTransitionTime":"2026-01-24T00:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:18 crc kubenswrapper[4676]: E0124 00:04:18.085297 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:18Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.090347 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.090408 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.090423 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.090445 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.090458 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:18Z","lastTransitionTime":"2026-01-24T00:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:18 crc kubenswrapper[4676]: E0124 00:04:18.101934 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:18Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.105974 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.106019 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.106032 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.106052 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.106063 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:18Z","lastTransitionTime":"2026-01-24T00:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:18 crc kubenswrapper[4676]: E0124 00:04:18.119402 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:18Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.123691 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.123737 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.123750 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.123770 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.123784 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:18Z","lastTransitionTime":"2026-01-24T00:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:18 crc kubenswrapper[4676]: E0124 00:04:18.135138 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:18Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.138684 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.138712 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.138721 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.138743 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.138755 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:18Z","lastTransitionTime":"2026-01-24T00:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.146216 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs\") pod \"network-metrics-daemon-r4q22\" (UID: \"18335446-e572-4741-ad9e-e7aadee7550b\") " pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:18 crc kubenswrapper[4676]: E0124 00:04:18.146442 4676 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:04:18 crc kubenswrapper[4676]: E0124 00:04:18.146546 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs podName:18335446-e572-4741-ad9e-e7aadee7550b nodeName:}" failed. No retries permitted until 2026-01-24 00:04:26.14652001 +0000 UTC m=+50.176491011 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs") pod "network-metrics-daemon-r4q22" (UID: "18335446-e572-4741-ad9e-e7aadee7550b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:04:18 crc kubenswrapper[4676]: E0124 00:04:18.149570 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-a
c0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:18Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:18 crc kubenswrapper[4676]: E0124 00:04:18.149726 4676 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.151330 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.151359 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.151370 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.151403 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.151417 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:18Z","lastTransitionTime":"2026-01-24T00:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.224047 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 01:59:43.370394791 +0000 UTC Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.254558 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.254625 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.254648 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.254675 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.254696 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:18Z","lastTransitionTime":"2026-01-24T00:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.254885 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.254976 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:18 crc kubenswrapper[4676]: E0124 00:04:18.255100 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:18 crc kubenswrapper[4676]: E0124 00:04:18.255216 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.357028 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.357074 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.357088 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.357108 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.357121 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:18Z","lastTransitionTime":"2026-01-24T00:04:18Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.460117 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.460191 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.460216 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.460240 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.460264 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:18Z","lastTransitionTime":"2026-01-24T00:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.563015 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.563088 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.563110 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.563138 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.563159 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:18Z","lastTransitionTime":"2026-01-24T00:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.666066 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.666202 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.666227 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.666254 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.666274 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:18Z","lastTransitionTime":"2026-01-24T00:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.770107 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.770252 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.770280 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.770342 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.770429 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:18Z","lastTransitionTime":"2026-01-24T00:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.873910 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.873963 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.874008 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.874031 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.874049 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:18Z","lastTransitionTime":"2026-01-24T00:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.976603 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.976646 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.976654 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.976668 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:18 crc kubenswrapper[4676]: I0124 00:04:18.976679 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:18Z","lastTransitionTime":"2026-01-24T00:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.079145 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.079175 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.079185 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.079198 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.079208 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:19Z","lastTransitionTime":"2026-01-24T00:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.178100 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.179422 4676 scope.go:117] "RemoveContainer" containerID="901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.181867 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.181898 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.181910 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.181927 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.181939 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:19Z","lastTransitionTime":"2026-01-24T00:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.224752 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:29:49.248811921 +0000 UTC Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.254635 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.254660 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:19 crc kubenswrapper[4676]: E0124 00:04:19.254752 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:19 crc kubenswrapper[4676]: E0124 00:04:19.254869 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.284609 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.284654 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.284671 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.284693 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.284710 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:19Z","lastTransitionTime":"2026-01-24T00:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.388147 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.388197 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.388216 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.388242 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.388259 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:19Z","lastTransitionTime":"2026-01-24T00:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.491206 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.491269 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.491282 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.491308 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.491324 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:19Z","lastTransitionTime":"2026-01-24T00:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.581897 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/1.log" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.584937 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerStarted","Data":"a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76"} Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.593184 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.593217 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.593227 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.593243 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.593254 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:19Z","lastTransitionTime":"2026-01-24T00:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.696286 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.696317 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.696325 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.696339 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.696348 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:19Z","lastTransitionTime":"2026-01-24T00:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.799866 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.799976 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.800053 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.800132 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.800165 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:19Z","lastTransitionTime":"2026-01-24T00:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.903139 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.903181 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.903196 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.903214 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:19 crc kubenswrapper[4676]: I0124 00:04:19.903231 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:19Z","lastTransitionTime":"2026-01-24T00:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.005510 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.005542 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.005551 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.005564 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.005574 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:20Z","lastTransitionTime":"2026-01-24T00:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.107783 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.107812 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.107820 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.107833 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.107842 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:20Z","lastTransitionTime":"2026-01-24T00:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.209989 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.210020 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.210028 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.210042 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.210050 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:20Z","lastTransitionTime":"2026-01-24T00:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.225614 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 11:16:08.18041271 +0000 UTC Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.255021 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.255032 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:20 crc kubenswrapper[4676]: E0124 00:04:20.255163 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:20 crc kubenswrapper[4676]: E0124 00:04:20.255571 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.312539 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.312571 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.312581 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.312600 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.312614 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:20Z","lastTransitionTime":"2026-01-24T00:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.415680 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.415717 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.415732 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.415753 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.415768 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:20Z","lastTransitionTime":"2026-01-24T00:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.518115 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.518164 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.518182 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.518206 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.518226 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:20Z","lastTransitionTime":"2026-01-24T00:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.589556 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.610291 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.621278 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.621332 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.621351 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.621402 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.621422 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:20Z","lastTransitionTime":"2026-01-24T00:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.629131 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.657162 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:06Z\\\",\\\"message\\\":\\\"e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 00:04:06.463876 6013 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0124 00:04:06.464703 6013 
ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nI0124 00:04:06.460422 6013 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0124 00:04:06.458980 6013 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0124 00:04:06.464808 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.675684 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a4
3f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.685831 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.697339 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.707200 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.723871 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.723904 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.723912 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.723927 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.723936 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:20Z","lastTransitionTime":"2026-01-24T00:04:20Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.728194 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d2
2dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.740629 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.752050 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.761937 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.778455 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.790063 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.804926 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b
902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.819620 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c4
5dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.825976 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.826025 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.826038 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.826057 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.826069 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:20Z","lastTransitionTime":"2026-01-24T00:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.835162 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7
ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.846468 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:20Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:20 crc 
kubenswrapper[4676]: I0124 00:04:20.928459 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.928492 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.928503 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.928519 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:20 crc kubenswrapper[4676]: I0124 00:04:20.928529 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:20Z","lastTransitionTime":"2026-01-24T00:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.034654 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.034723 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.034741 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.034768 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.034793 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:21Z","lastTransitionTime":"2026-01-24T00:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.138053 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.138109 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.138127 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.138152 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.138169 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:21Z","lastTransitionTime":"2026-01-24T00:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.226335 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:40:59.247113713 +0000 UTC Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.240708 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.240759 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.240776 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.240800 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.240817 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:21Z","lastTransitionTime":"2026-01-24T00:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.255661 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.255728 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:21 crc kubenswrapper[4676]: E0124 00:04:21.255827 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:21 crc kubenswrapper[4676]: E0124 00:04:21.256052 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.344343 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.344487 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.344509 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.344539 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.344560 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:21Z","lastTransitionTime":"2026-01-24T00:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.447822 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.447902 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.447923 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.447954 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.447973 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:21Z","lastTransitionTime":"2026-01-24T00:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.550803 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.550860 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.550872 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.550891 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.550904 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:21Z","lastTransitionTime":"2026-01-24T00:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.595651 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/2.log" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.596782 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/1.log" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.601128 4676 generic.go:334] "Generic (PLEG): container finished" podID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerID="a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76" exitCode=1 Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.601196 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerDied","Data":"a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76"} Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.601270 4676 scope.go:117] "RemoveContainer" containerID="901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.602504 4676 scope.go:117] "RemoveContainer" containerID="a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76" Jan 24 00:04:21 crc kubenswrapper[4676]: E0124 00:04:21.602885 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.624787 4676 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.643958 4676 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.652963 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.653032 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.653049 4676 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.653069 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.653083 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:21Z","lastTransitionTime":"2026-01-24T00:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.660685 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 
maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.676587 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.694597 4676 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.711018 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.723428 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc 
kubenswrapper[4676]: I0124 00:04:21.735356 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.744725 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.755073 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.755358 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.755428 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.755453 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.755471 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:21Z","lastTransitionTime":"2026-01-24T00:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.763896 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a2ef8ebf68032c2bbcb3d41f57a1ff938b482904ce7cbbb377f04418ff15a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:06Z\\\",\\\"message\\\":\\\"e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0124 00:04:06.463876 6013 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0124 00:04:06.464703 6013 
ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nI0124 00:04:06.460422 6013 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0124 00:04:06.458980 6013 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0124 00:04:06.464808 6013 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:21Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:20.427464 6215 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 00:04:20.427560 6215 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 00:04:20.427590 6215 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0124 00:04:20.427626 6215 handler.go:190] Sending *v1.Namespace event 
handler 1 for removal\\\\nI0124 00:04:20.427633 6215 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0124 00:04:20.427665 6215 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 00:04:20.427671 6215 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0124 00:04:20.427693 6215 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 00:04:20.427707 6215 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 00:04:20.427719 6215 handler.go:208] Removed *v1.Node event handler 2\\\\nI0124 00:04:20.427714 6215 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0124 00:04:20.427729 6215 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0124 00:04:20.427751 6215 factory.go:656] Stopping watch factory\\\\nI0124 00:04:20.427768 6215 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:20.427795 6215 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 00:04:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir
\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\
\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: 
I0124 00:04:21.775480 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a43f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.800473 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.816480 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.836984 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.847894 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.856940 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 
00:04:21.856962 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.856969 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.856981 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.856990 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:21Z","lastTransitionTime":"2026-01-24T00:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.861161 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.874359 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:21Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.959737 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.959765 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.959774 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.959785 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:21 crc kubenswrapper[4676]: I0124 00:04:21.959792 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:21Z","lastTransitionTime":"2026-01-24T00:04:21Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.062594 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.062653 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.062670 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.062692 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.062708 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:22Z","lastTransitionTime":"2026-01-24T00:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.165858 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.165958 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.165974 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.165998 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.166017 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:22Z","lastTransitionTime":"2026-01-24T00:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.227132 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:26:51.20875663 +0000 UTC Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.255711 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.255761 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:22 crc kubenswrapper[4676]: E0124 00:04:22.255879 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:22 crc kubenswrapper[4676]: E0124 00:04:22.256006 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.268057 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.268105 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.268123 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.268150 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.268173 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:22Z","lastTransitionTime":"2026-01-24T00:04:22Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.371573 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.371642 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.371659 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.371690 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.371714 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:22Z","lastTransitionTime":"2026-01-24T00:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.474282 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.474367 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.474430 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.474466 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.474489 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:22Z","lastTransitionTime":"2026-01-24T00:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.576904 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.576971 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.576988 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.577011 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.577028 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:22Z","lastTransitionTime":"2026-01-24T00:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.606766 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/2.log" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.612114 4676 scope.go:117] "RemoveContainer" containerID="a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76" Jan 24 00:04:22 crc kubenswrapper[4676]: E0124 00:04:22.612345 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.634431 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c4
5dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.651372 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.666520 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.680463 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.680487 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.680494 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:22 crc 
kubenswrapper[4676]: I0124 00:04:22.680507 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.680515 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:22Z","lastTransitionTime":"2026-01-24T00:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.684834 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.701047 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.716101 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b
902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.729133 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc 
kubenswrapper[4676]: I0124 00:04:22.741931 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.755416 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.770648 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a4
3f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.783058 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.783117 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.783134 4676 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.783192 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.783212 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:22Z","lastTransitionTime":"2026-01-24T00:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.789420 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:21Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:20.427464 6215 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 00:04:20.427560 6215 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 00:04:20.427590 6215 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0124 
00:04:20.427626 6215 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0124 00:04:20.427633 6215 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0124 00:04:20.427665 6215 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 00:04:20.427671 6215 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0124 00:04:20.427693 6215 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 00:04:20.427707 6215 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 00:04:20.427719 6215 handler.go:208] Removed *v1.Node event handler 2\\\\nI0124 00:04:20.427714 6215 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0124 00:04:20.427729 6215 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0124 00:04:20.427751 6215 factory.go:656] Stopping watch factory\\\\nI0124 00:04:20.427768 6215 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:20.427795 6215 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 00:04:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.807453 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.823484 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.837498 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.860088 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.874646 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.885313 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.885343 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.885352 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.885367 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.885392 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:22Z","lastTransitionTime":"2026-01-24T00:04:22Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.903225 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d2
2dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:22Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.987820 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.987860 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.987871 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.987889 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:22 crc kubenswrapper[4676]: I0124 00:04:22.987900 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:22Z","lastTransitionTime":"2026-01-24T00:04:22Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.091182 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.091252 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.091278 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.091311 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.091334 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:23Z","lastTransitionTime":"2026-01-24T00:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.193711 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.193773 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.193797 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.193825 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.193847 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:23Z","lastTransitionTime":"2026-01-24T00:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.227714 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:56:32.301149403 +0000 UTC Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.255422 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.255433 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:23 crc kubenswrapper[4676]: E0124 00:04:23.255635 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:23 crc kubenswrapper[4676]: E0124 00:04:23.255741 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.296739 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.296801 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.296823 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.296854 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.296877 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:23Z","lastTransitionTime":"2026-01-24T00:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.400005 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.400071 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.400088 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.400113 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.400131 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:23Z","lastTransitionTime":"2026-01-24T00:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.503192 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.503257 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.503269 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.503291 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.503302 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:23Z","lastTransitionTime":"2026-01-24T00:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.605852 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.605911 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.605925 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.605946 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.605960 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:23Z","lastTransitionTime":"2026-01-24T00:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.708189 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.708311 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.708334 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.708465 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.708483 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:23Z","lastTransitionTime":"2026-01-24T00:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.810836 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.810869 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.810877 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.810889 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.810901 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:23Z","lastTransitionTime":"2026-01-24T00:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.913941 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.914007 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.914025 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.914052 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:23 crc kubenswrapper[4676]: I0124 00:04:23.914071 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:23Z","lastTransitionTime":"2026-01-24T00:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.017354 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.017435 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.017454 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.017478 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.017495 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:24Z","lastTransitionTime":"2026-01-24T00:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.120789 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.120844 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.120863 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.120888 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.120910 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:24Z","lastTransitionTime":"2026-01-24T00:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.193778 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.206709 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.212701 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.223942 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.224058 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.224083 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.224149 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.224180 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:24Z","lastTransitionTime":"2026-01-24T00:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.228272 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 05:37:50.3642817 +0000 UTC Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.233118 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.251629 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.255163 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.255163 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:24 crc kubenswrapper[4676]: E0124 00:04:24.255350 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:24 crc kubenswrapper[4676]: E0124 00:04:24.255516 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.272898 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mo
untPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.293568 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c4
5dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.315159 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.327027 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.327154 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.327173 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.327198 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.327217 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:24Z","lastTransitionTime":"2026-01-24T00:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.331350 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc 
kubenswrapper[4676]: I0124 00:04:24.350291 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.367216 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.399678 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:21Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:20.427464 6215 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 00:04:20.427560 6215 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 00:04:20.427590 6215 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0124 00:04:20.427626 6215 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0124 00:04:20.427633 6215 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0124 00:04:20.427665 6215 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 00:04:20.427671 6215 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0124 00:04:20.427693 6215 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 00:04:20.427707 6215 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 00:04:20.427719 6215 handler.go:208] Removed *v1.Node event handler 2\\\\nI0124 00:04:20.427714 6215 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0124 00:04:20.427729 6215 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0124 00:04:20.427751 6215 factory.go:656] Stopping watch factory\\\\nI0124 00:04:20.427768 6215 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:20.427795 6215 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 00:04:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.418968 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a4
3f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.429923 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.430013 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.430054 4676 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.430090 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.430111 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:24Z","lastTransitionTime":"2026-01-24T00:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.445949 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e294
42cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.461062 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.483697 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00
:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.503235 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.520814 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.532684 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.532712 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.532720 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.532735 4676 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.532745 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:24Z","lastTransitionTime":"2026-01-24T00:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.536413 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:24Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.634706 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.634770 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.634787 4676 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.634811 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.634834 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:24Z","lastTransitionTime":"2026-01-24T00:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.737234 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.737289 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.737311 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.737336 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.737365 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:24Z","lastTransitionTime":"2026-01-24T00:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.840059 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.840128 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.840151 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.840183 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.840207 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:24Z","lastTransitionTime":"2026-01-24T00:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.943506 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.943564 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.943582 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.943611 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:24 crc kubenswrapper[4676]: I0124 00:04:24.943632 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:24Z","lastTransitionTime":"2026-01-24T00:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.046405 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.046445 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.046455 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.046471 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.046481 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:25Z","lastTransitionTime":"2026-01-24T00:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.149419 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.149835 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.149927 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.150021 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.150109 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:25Z","lastTransitionTime":"2026-01-24T00:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.228407 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:44:12.006115868 +0000 UTC Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.253046 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.253113 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.253130 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.253160 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.253180 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:25Z","lastTransitionTime":"2026-01-24T00:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.255426 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:25 crc kubenswrapper[4676]: E0124 00:04:25.255597 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.255427 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:25 crc kubenswrapper[4676]: E0124 00:04:25.256165 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.356170 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.356232 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.356250 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.356276 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.356312 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:25Z","lastTransitionTime":"2026-01-24T00:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.459306 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.459353 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.459372 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.459441 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.459458 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:25Z","lastTransitionTime":"2026-01-24T00:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.562514 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.562571 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.562592 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.562626 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.562650 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:25Z","lastTransitionTime":"2026-01-24T00:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.665927 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.665981 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.665997 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.666021 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.666039 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:25Z","lastTransitionTime":"2026-01-24T00:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.769146 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.769214 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.769233 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.769258 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.769275 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:25Z","lastTransitionTime":"2026-01-24T00:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.872597 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.872694 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.872717 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.872744 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.872764 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:25Z","lastTransitionTime":"2026-01-24T00:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.975678 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.975749 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.975767 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.975792 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:25 crc kubenswrapper[4676]: I0124 00:04:25.975810 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:25Z","lastTransitionTime":"2026-01-24T00:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.079072 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.079148 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.079171 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.079203 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.079225 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:26Z","lastTransitionTime":"2026-01-24T00:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.182177 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.182248 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.182265 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.182292 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.182311 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:26Z","lastTransitionTime":"2026-01-24T00:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.229044 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:42:26.9757949 +0000 UTC Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.235779 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs\") pod \"network-metrics-daemon-r4q22\" (UID: \"18335446-e572-4741-ad9e-e7aadee7550b\") " pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:26 crc kubenswrapper[4676]: E0124 00:04:26.236047 4676 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:04:26 crc kubenswrapper[4676]: E0124 00:04:26.236138 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs podName:18335446-e572-4741-ad9e-e7aadee7550b nodeName:}" failed. No retries permitted until 2026-01-24 00:04:42.236107119 +0000 UTC m=+66.266078150 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs") pod "network-metrics-daemon-r4q22" (UID: "18335446-e572-4741-ad9e-e7aadee7550b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.254842 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.255247 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:26 crc kubenswrapper[4676]: E0124 00:04:26.255629 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:26 crc kubenswrapper[4676]: E0124 00:04:26.256135 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.285581 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.285643 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.285700 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.285724 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.285740 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:26Z","lastTransitionTime":"2026-01-24T00:04:26Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.289963 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:21Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:20.427464 6215 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 00:04:20.427560 6215 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 00:04:20.427590 6215 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0124 
00:04:20.427626 6215 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0124 00:04:20.427633 6215 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0124 00:04:20.427665 6215 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 00:04:20.427671 6215 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0124 00:04:20.427693 6215 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 00:04:20.427707 6215 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 00:04:20.427719 6215 handler.go:208] Removed *v1.Node event handler 2\\\\nI0124 00:04:20.427714 6215 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0124 00:04:20.427729 6215 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0124 00:04:20.427751 6215 factory.go:656] Stopping watch factory\\\\nI0124 00:04:20.427768 6215 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:20.427795 6215 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 00:04:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.308705 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a4
3f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.327725 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.348019 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.376885 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.389424 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.389474 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.389492 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.389516 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.389533 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:26Z","lastTransitionTime":"2026-01-24T00:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.393802 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.424093 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e6
69a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.443804 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.462413 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.480066 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f036afbd-252a-4ed3-88e6-46256da87940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1880c78addfa5865cfdb73ac1d2965ff8142978ac0814615ea0d6ecb005f5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4342a165126bd52a03ab2a8ac09666d08d16d3b8034de7b6be1ef02506798c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7ee0b4dfd54ec0a33df18eba05dbd234ef0ed39fe66b05ee5d8254614955fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.492192 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.492258 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.492283 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.492311 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.492333 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:26Z","lastTransitionTime":"2026-01-24T00:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.500795 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.521204 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.544452 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.561485 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b
902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.583265 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c4
5dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.594313 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.594428 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.594453 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.594478 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.594495 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:26Z","lastTransitionTime":"2026-01-24T00:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.599186 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc 
kubenswrapper[4676]: I0124 00:04:26.618997 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.635396 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:26Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.697179 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.697217 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.697226 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.697241 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.697252 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:26Z","lastTransitionTime":"2026-01-24T00:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.799649 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.800006 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.800020 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.800038 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.800053 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:26Z","lastTransitionTime":"2026-01-24T00:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.902425 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.902468 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.902480 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.902496 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:26 crc kubenswrapper[4676]: I0124 00:04:26.902509 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:26Z","lastTransitionTime":"2026-01-24T00:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.006171 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.006218 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.006230 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.006246 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.006257 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:27Z","lastTransitionTime":"2026-01-24T00:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.109448 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.109554 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.109644 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.109709 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.109725 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:27Z","lastTransitionTime":"2026-01-24T00:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.145422 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:04:27 crc kubenswrapper[4676]: E0124 00:04:27.145672 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 00:04:59.145634419 +0000 UTC m=+83.175605470 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.213358 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.213448 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.213467 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.213492 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.213511 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:27Z","lastTransitionTime":"2026-01-24T00:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.229794 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:15:22.811558955 +0000 UTC Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.247278 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.247346 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:27 crc kubenswrapper[4676]: E0124 00:04:27.247434 4676 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:04:27 crc kubenswrapper[4676]: E0124 00:04:27.247520 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 00:04:59.247495749 +0000 UTC m=+83.277466790 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:04:27 crc kubenswrapper[4676]: E0124 00:04:27.247565 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:04:27 crc kubenswrapper[4676]: E0124 00:04:27.247589 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:04:27 crc kubenswrapper[4676]: E0124 00:04:27.247606 4676 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:27 crc kubenswrapper[4676]: E0124 00:04:27.247664 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 00:04:59.247645193 +0000 UTC m=+83.277616234 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.247446 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:27 crc kubenswrapper[4676]: E0124 00:04:27.248294 4676 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:04:27 crc kubenswrapper[4676]: E0124 00:04:27.248502 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 00:04:59.248444857 +0000 UTC m=+83.278415898 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.247718 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:27 crc kubenswrapper[4676]: E0124 00:04:27.248625 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:04:27 crc kubenswrapper[4676]: E0124 00:04:27.248672 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:04:27 crc kubenswrapper[4676]: E0124 00:04:27.248702 4676 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:27 crc kubenswrapper[4676]: E0124 00:04:27.248783 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-24 00:04:59.248760636 +0000 UTC m=+83.278731667 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.255555 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.255570 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:27 crc kubenswrapper[4676]: E0124 00:04:27.255774 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:27 crc kubenswrapper[4676]: E0124 00:04:27.256105 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.317654 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.317828 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.317863 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.317894 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.317925 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:27Z","lastTransitionTime":"2026-01-24T00:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.420960 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.421349 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.421528 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.421692 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.421881 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:27Z","lastTransitionTime":"2026-01-24T00:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.524773 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.524819 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.524834 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.524854 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.524868 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:27Z","lastTransitionTime":"2026-01-24T00:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.627783 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.627827 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.627843 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.627864 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.627879 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:27Z","lastTransitionTime":"2026-01-24T00:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.730011 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.730264 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.730352 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.730451 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.730513 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:27Z","lastTransitionTime":"2026-01-24T00:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.833279 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.833334 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.833348 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.833369 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.833404 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:27Z","lastTransitionTime":"2026-01-24T00:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.935070 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.935289 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.935347 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.935430 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:27 crc kubenswrapper[4676]: I0124 00:04:27.935501 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:27Z","lastTransitionTime":"2026-01-24T00:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.038080 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.038115 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.038128 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.038144 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.038153 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:28Z","lastTransitionTime":"2026-01-24T00:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.141296 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.141369 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.141437 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.141467 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.141489 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:28Z","lastTransitionTime":"2026-01-24T00:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.230755 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:51:54.46834656 +0000 UTC Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.233129 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.233362 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.233570 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.233709 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.233851 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:28Z","lastTransitionTime":"2026-01-24T00:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.254906 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.254992 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:28 crc kubenswrapper[4676]: E0124 00:04:28.255046 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:28 crc kubenswrapper[4676]: E0124 00:04:28.255177 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:28 crc kubenswrapper[4676]: E0124 00:04:28.256074 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:28Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.261210 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.261240 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.261250 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.261265 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.261275 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:28Z","lastTransitionTime":"2026-01-24T00:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:28 crc kubenswrapper[4676]: E0124 00:04:28.280619 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:28Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.286197 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.286253 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.286272 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.286296 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.286314 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:28Z","lastTransitionTime":"2026-01-24T00:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:28 crc kubenswrapper[4676]: E0124 00:04:28.303650 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:28Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.308726 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.308830 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.308847 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.308895 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.308916 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:28Z","lastTransitionTime":"2026-01-24T00:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:28 crc kubenswrapper[4676]: E0124 00:04:28.329187 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:28Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.335052 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.335114 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.335134 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.335159 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.335181 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:28Z","lastTransitionTime":"2026-01-24T00:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:28 crc kubenswrapper[4676]: E0124 00:04:28.352685 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:28Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:28 crc kubenswrapper[4676]: E0124 00:04:28.352932 4676 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.354979 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.355043 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.355062 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.355086 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.355104 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:28Z","lastTransitionTime":"2026-01-24T00:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.457630 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.457689 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.457704 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.457726 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.457740 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:28Z","lastTransitionTime":"2026-01-24T00:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.560691 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.560732 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.560748 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.560767 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.560782 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:28Z","lastTransitionTime":"2026-01-24T00:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.663513 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.663570 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.663589 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.663616 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.663633 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:28Z","lastTransitionTime":"2026-01-24T00:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.769457 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.769514 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.769531 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.769555 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.769572 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:28Z","lastTransitionTime":"2026-01-24T00:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.875280 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.875325 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.875339 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.875356 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.875389 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:28Z","lastTransitionTime":"2026-01-24T00:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.977705 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.977778 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.977796 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.977823 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:28 crc kubenswrapper[4676]: I0124 00:04:28.977841 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:28Z","lastTransitionTime":"2026-01-24T00:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.080090 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.080212 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.080240 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.080272 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.080297 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:29Z","lastTransitionTime":"2026-01-24T00:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.183180 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.183550 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.183683 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.183789 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.183870 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:29Z","lastTransitionTime":"2026-01-24T00:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.231447 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 17:36:41.61309151 +0000 UTC Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.254767 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.254767 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:29 crc kubenswrapper[4676]: E0124 00:04:29.255428 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:29 crc kubenswrapper[4676]: E0124 00:04:29.255889 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.286532 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.286572 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.286582 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.286596 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.286607 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:29Z","lastTransitionTime":"2026-01-24T00:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.389413 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.389450 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.389462 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.389478 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.389490 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:29Z","lastTransitionTime":"2026-01-24T00:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.492095 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.492497 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.492581 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.492654 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.492721 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:29Z","lastTransitionTime":"2026-01-24T00:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.595387 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.595432 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.595443 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.595459 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.595473 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:29Z","lastTransitionTime":"2026-01-24T00:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.698080 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.698148 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.698170 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.698200 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.698217 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:29Z","lastTransitionTime":"2026-01-24T00:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.800902 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.800969 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.800993 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.801020 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.801041 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:29Z","lastTransitionTime":"2026-01-24T00:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.904768 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.904828 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.904847 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.904872 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:29 crc kubenswrapper[4676]: I0124 00:04:29.904891 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:29Z","lastTransitionTime":"2026-01-24T00:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.007655 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.007782 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.007800 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.007829 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.007880 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:30Z","lastTransitionTime":"2026-01-24T00:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.111816 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.111872 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.111890 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.111914 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.111934 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:30Z","lastTransitionTime":"2026-01-24T00:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.215320 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.215368 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.215418 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.215441 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.215459 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:30Z","lastTransitionTime":"2026-01-24T00:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.232065 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:23:27.535480944 +0000 UTC Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.255650 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.255704 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:30 crc kubenswrapper[4676]: E0124 00:04:30.256008 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:30 crc kubenswrapper[4676]: E0124 00:04:30.256137 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.317999 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.318337 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.318692 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.318978 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.319284 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:30Z","lastTransitionTime":"2026-01-24T00:04:30Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.422016 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.422079 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.422101 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.422132 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.422154 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:30Z","lastTransitionTime":"2026-01-24T00:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.525290 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.525343 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.525360 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.525407 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.525424 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:30Z","lastTransitionTime":"2026-01-24T00:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.628206 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.628275 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.628298 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.628323 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.628340 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:30Z","lastTransitionTime":"2026-01-24T00:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.731469 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.731809 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.731968 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.732120 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.732250 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:30Z","lastTransitionTime":"2026-01-24T00:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.834855 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.834904 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.834921 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.834944 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.834962 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:30Z","lastTransitionTime":"2026-01-24T00:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.937455 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.937504 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.937521 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.937544 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:30 crc kubenswrapper[4676]: I0124 00:04:30.937561 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:30Z","lastTransitionTime":"2026-01-24T00:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.040227 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.040284 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.040302 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.040321 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.040334 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:31Z","lastTransitionTime":"2026-01-24T00:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.143111 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.143177 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.143195 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.143221 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.143237 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:31Z","lastTransitionTime":"2026-01-24T00:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.283326 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:16:20.428641491 +0000 UTC Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.283525 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:31 crc kubenswrapper[4676]: E0124 00:04:31.283719 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.284470 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:31 crc kubenswrapper[4676]: E0124 00:04:31.284646 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.286005 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.286063 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.286081 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.286106 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.286123 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:31Z","lastTransitionTime":"2026-01-24T00:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.389449 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.389535 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.389555 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.389611 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.389632 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:31Z","lastTransitionTime":"2026-01-24T00:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.491910 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.491985 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.492007 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.492035 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.492056 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:31Z","lastTransitionTime":"2026-01-24T00:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.602164 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.602239 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.602261 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.602290 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.602311 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:31Z","lastTransitionTime":"2026-01-24T00:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.704618 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.704668 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.704684 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.704706 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.704723 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:31Z","lastTransitionTime":"2026-01-24T00:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.808125 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.808167 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.808175 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.808189 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.808200 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:31Z","lastTransitionTime":"2026-01-24T00:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.911355 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.911421 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.911434 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.911461 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:31 crc kubenswrapper[4676]: I0124 00:04:31.911480 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:31Z","lastTransitionTime":"2026-01-24T00:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.014042 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.014102 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.014115 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.014137 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.014155 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:32Z","lastTransitionTime":"2026-01-24T00:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.117014 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.117050 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.117059 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.117075 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.117084 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:32Z","lastTransitionTime":"2026-01-24T00:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.220588 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.220664 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.220679 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.220702 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.220737 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:32Z","lastTransitionTime":"2026-01-24T00:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.255209 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.255236 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:32 crc kubenswrapper[4676]: E0124 00:04:32.255445 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:32 crc kubenswrapper[4676]: E0124 00:04:32.255521 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.284127 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:44:22.813873282 +0000 UTC Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.324444 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.324500 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.324510 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.324533 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.324546 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:32Z","lastTransitionTime":"2026-01-24T00:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.427529 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.427619 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.427638 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.427663 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.427680 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:32Z","lastTransitionTime":"2026-01-24T00:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.531194 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.531330 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.531439 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.531466 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.531484 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:32Z","lastTransitionTime":"2026-01-24T00:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.636030 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.636095 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.636109 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.636133 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.636149 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:32Z","lastTransitionTime":"2026-01-24T00:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.740134 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.740226 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.740246 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.740277 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.740300 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:32Z","lastTransitionTime":"2026-01-24T00:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.844066 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.844141 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.844158 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.844188 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.844204 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:32Z","lastTransitionTime":"2026-01-24T00:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.947690 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.947746 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.947759 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.947776 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:32 crc kubenswrapper[4676]: I0124 00:04:32.947786 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:32Z","lastTransitionTime":"2026-01-24T00:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.051028 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.051100 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.051125 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.051156 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.051179 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:33Z","lastTransitionTime":"2026-01-24T00:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.155293 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.155341 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.155356 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.155411 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.155434 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:33Z","lastTransitionTime":"2026-01-24T00:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.255607 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.255712 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:33 crc kubenswrapper[4676]: E0124 00:04:33.255788 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:33 crc kubenswrapper[4676]: E0124 00:04:33.255903 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.258877 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.258929 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.258943 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.258970 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.258983 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:33Z","lastTransitionTime":"2026-01-24T00:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.285373 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:17:57.636103891 +0000 UTC Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.361183 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.361219 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.361229 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.361245 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.361258 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:33Z","lastTransitionTime":"2026-01-24T00:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.464530 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.464598 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.464621 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.464655 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.464679 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:33Z","lastTransitionTime":"2026-01-24T00:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.566842 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.566894 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.566911 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.566934 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.566952 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:33Z","lastTransitionTime":"2026-01-24T00:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.670236 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.670299 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.670321 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.670350 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.670371 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:33Z","lastTransitionTime":"2026-01-24T00:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.773109 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.773146 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.773154 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.773184 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.773195 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:33Z","lastTransitionTime":"2026-01-24T00:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.875304 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.875364 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.875421 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.875443 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.875456 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:33Z","lastTransitionTime":"2026-01-24T00:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.979101 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.979184 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.979201 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.979230 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:33 crc kubenswrapper[4676]: I0124 00:04:33.979248 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:33Z","lastTransitionTime":"2026-01-24T00:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.082536 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.082611 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.082623 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.082646 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.082659 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:34Z","lastTransitionTime":"2026-01-24T00:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.185507 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.185548 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.185560 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.185580 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.185592 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:34Z","lastTransitionTime":"2026-01-24T00:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.254851 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.254924 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:34 crc kubenswrapper[4676]: E0124 00:04:34.255444 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:34 crc kubenswrapper[4676]: E0124 00:04:34.255606 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.256005 4676 scope.go:117] "RemoveContainer" containerID="a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76" Jan 24 00:04:34 crc kubenswrapper[4676]: E0124 00:04:34.256252 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.285812 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:43:17.99114929 +0000 UTC Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.287427 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.287468 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.287482 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:34 crc 
kubenswrapper[4676]: I0124 00:04:34.287500 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.287511 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:34Z","lastTransitionTime":"2026-01-24T00:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.390589 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.390662 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.390682 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.390708 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.390725 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:34Z","lastTransitionTime":"2026-01-24T00:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.493905 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.493947 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.493958 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.493973 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.493983 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:34Z","lastTransitionTime":"2026-01-24T00:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.597142 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.597214 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.597234 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.597263 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.597283 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:34Z","lastTransitionTime":"2026-01-24T00:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.700102 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.700194 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.700213 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.700241 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.700258 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:34Z","lastTransitionTime":"2026-01-24T00:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.803411 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.803502 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.803527 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.803607 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.803641 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:34Z","lastTransitionTime":"2026-01-24T00:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.906926 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.906989 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.907013 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.907042 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:34 crc kubenswrapper[4676]: I0124 00:04:34.907065 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:34Z","lastTransitionTime":"2026-01-24T00:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.010258 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.010322 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.010333 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.010349 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.010399 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:35Z","lastTransitionTime":"2026-01-24T00:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.114172 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.114230 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.114248 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.114273 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.114291 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:35Z","lastTransitionTime":"2026-01-24T00:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.217448 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.217503 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.217515 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.217551 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.217564 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:35Z","lastTransitionTime":"2026-01-24T00:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.254746 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.255186 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:35 crc kubenswrapper[4676]: E0124 00:04:35.255413 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:35 crc kubenswrapper[4676]: E0124 00:04:35.255543 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.286048 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:50:58.616939546 +0000 UTC Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.320354 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.320464 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.320484 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.320874 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.320933 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:35Z","lastTransitionTime":"2026-01-24T00:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.424456 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.424591 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.424668 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.424703 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.424944 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:35Z","lastTransitionTime":"2026-01-24T00:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.528423 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.528553 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.528578 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.528604 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.528622 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:35Z","lastTransitionTime":"2026-01-24T00:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.631756 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.631792 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.631800 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.631814 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.631824 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:35Z","lastTransitionTime":"2026-01-24T00:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.734761 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.734820 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.734843 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.734873 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.734896 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:35Z","lastTransitionTime":"2026-01-24T00:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.838198 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.838261 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.838281 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.838306 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.838326 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:35Z","lastTransitionTime":"2026-01-24T00:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.942081 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.942143 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.942159 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.942185 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:35 crc kubenswrapper[4676]: I0124 00:04:35.942201 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:35Z","lastTransitionTime":"2026-01-24T00:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.045582 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.045708 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.045728 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.045752 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.045768 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:36Z","lastTransitionTime":"2026-01-24T00:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.148201 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.148273 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.148283 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.148298 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.148306 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:36Z","lastTransitionTime":"2026-01-24T00:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.251032 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.251536 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.251554 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.251580 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.251597 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:36Z","lastTransitionTime":"2026-01-24T00:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.255105 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:36 crc kubenswrapper[4676]: E0124 00:04:36.255501 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.255878 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:36 crc kubenswrapper[4676]: E0124 00:04:36.256123 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.271388 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1
a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a43f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\
\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.286699 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:49:04.533568041 +0000 UTC Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.288104 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:21Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:20.427464 6215 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 00:04:20.427560 6215 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 00:04:20.427590 6215 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0124 00:04:20.427626 6215 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0124 00:04:20.427633 6215 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0124 00:04:20.427665 6215 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 00:04:20.427671 6215 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0124 00:04:20.427693 6215 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 00:04:20.427707 6215 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 00:04:20.427719 6215 handler.go:208] Removed *v1.Node event handler 2\\\\nI0124 00:04:20.427714 6215 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0124 00:04:20.427729 6215 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0124 00:04:20.427751 6215 factory.go:656] Stopping watch factory\\\\nI0124 00:04:20.427768 6215 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:20.427795 6215 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 00:04:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.299577 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.312878 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.323785 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.334565 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.342161 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.353607 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.353635 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.353644 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.353656 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.353665 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:36Z","lastTransitionTime":"2026-01-24T00:04:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.359933 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d2
2dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.372095 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c4
5dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.381983 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.393689 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f036afbd-252a-4ed3-88e6-46256da87940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1880c78addfa5865cfdb73ac1d2965ff8142978ac0814615ea0d6ecb005f5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4342a165126bd52a03ab2a8ac09666d08d16d3b8034de7b6be1ef02506798c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7ee0b4dfd54ec0a33df18eba05dbd234ef0ed39fe66b05ee5d8254614955fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.403199 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.414961 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.425011 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.433837 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b
902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.445683 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc 
kubenswrapper[4676]: I0124 00:04:36.455850 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.456626 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.456670 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.456681 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.456696 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.456706 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:36Z","lastTransitionTime":"2026-01-24T00:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.468633 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:36Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.558546 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.558581 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.558593 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.558610 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.558621 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:36Z","lastTransitionTime":"2026-01-24T00:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.774583 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.774626 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.774641 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.774659 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.774673 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:36Z","lastTransitionTime":"2026-01-24T00:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.877547 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.877602 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.877618 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.877642 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.877659 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:36Z","lastTransitionTime":"2026-01-24T00:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.979694 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.979732 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.979742 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.979757 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:36 crc kubenswrapper[4676]: I0124 00:04:36.979769 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:36Z","lastTransitionTime":"2026-01-24T00:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.082368 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.082424 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.082433 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.082449 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.082460 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:37Z","lastTransitionTime":"2026-01-24T00:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.185794 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.185897 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.185920 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.185949 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.185972 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:37Z","lastTransitionTime":"2026-01-24T00:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.255622 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.255755 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:37 crc kubenswrapper[4676]: E0124 00:04:37.255863 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:37 crc kubenswrapper[4676]: E0124 00:04:37.256074 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.287027 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:53:09.58734478 +0000 UTC Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.289311 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.289360 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.289405 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.289434 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.289454 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:37Z","lastTransitionTime":"2026-01-24T00:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.392372 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.392922 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.392950 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.392982 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.393010 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:37Z","lastTransitionTime":"2026-01-24T00:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.495702 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.495761 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.495782 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.495811 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.495832 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:37Z","lastTransitionTime":"2026-01-24T00:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.599312 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.599365 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.599409 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.599439 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.599455 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:37Z","lastTransitionTime":"2026-01-24T00:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.703021 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.703480 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.704020 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.704238 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.704475 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:37Z","lastTransitionTime":"2026-01-24T00:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.807994 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.808047 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.808063 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.808087 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.808104 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:37Z","lastTransitionTime":"2026-01-24T00:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.911031 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.911087 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.911108 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.911133 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:37 crc kubenswrapper[4676]: I0124 00:04:37.911148 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:37Z","lastTransitionTime":"2026-01-24T00:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.013570 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.013631 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.013654 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.013681 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.013704 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:38Z","lastTransitionTime":"2026-01-24T00:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.116241 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.116275 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.116286 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.116302 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.116313 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:38Z","lastTransitionTime":"2026-01-24T00:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.219681 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.219742 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.219759 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.219782 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.219798 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:38Z","lastTransitionTime":"2026-01-24T00:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.255291 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.255291 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:38 crc kubenswrapper[4676]: E0124 00:04:38.255478 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:38 crc kubenswrapper[4676]: E0124 00:04:38.255603 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.288788 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:19:27.004281216 +0000 UTC Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.321472 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.321493 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.321501 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.321512 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.321521 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:38Z","lastTransitionTime":"2026-01-24T00:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.388586 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.388636 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.388652 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.388674 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.388692 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:38Z","lastTransitionTime":"2026-01-24T00:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:38 crc kubenswrapper[4676]: E0124 00:04:38.407403 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:38Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.412228 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.412324 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.412341 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.412366 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.412412 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:38Z","lastTransitionTime":"2026-01-24T00:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:38 crc kubenswrapper[4676]: E0124 00:04:38.430943 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:38Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.435758 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.435821 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.435841 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.435866 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.435883 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:38Z","lastTransitionTime":"2026-01-24T00:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:38 crc kubenswrapper[4676]: E0124 00:04:38.454687 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:38Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.459712 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.459776 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.459802 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.459834 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.459858 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:38Z","lastTransitionTime":"2026-01-24T00:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:38 crc kubenswrapper[4676]: E0124 00:04:38.480220 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:38Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.485222 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.485496 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.485699 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.485882 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.486036 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:38Z","lastTransitionTime":"2026-01-24T00:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:38 crc kubenswrapper[4676]: E0124 00:04:38.505989 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:38Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:38 crc kubenswrapper[4676]: E0124 00:04:38.506739 4676 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.508508 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.508550 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.508562 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.508580 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.508592 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:38Z","lastTransitionTime":"2026-01-24T00:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.610729 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.610761 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.610770 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.610782 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.610791 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:38Z","lastTransitionTime":"2026-01-24T00:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.714431 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.714491 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.714509 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.714532 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.714548 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:38Z","lastTransitionTime":"2026-01-24T00:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.817324 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.817415 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.817435 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.817458 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.817475 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:38Z","lastTransitionTime":"2026-01-24T00:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.920617 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.920681 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.920698 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.920721 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:38 crc kubenswrapper[4676]: I0124 00:04:38.920739 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:38Z","lastTransitionTime":"2026-01-24T00:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.023926 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.024033 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.024062 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.024103 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.024130 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:39Z","lastTransitionTime":"2026-01-24T00:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.127041 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.127339 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.127506 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.127730 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.128026 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:39Z","lastTransitionTime":"2026-01-24T00:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.231331 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.231426 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.231441 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.231470 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.231487 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:39Z","lastTransitionTime":"2026-01-24T00:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.254994 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.255002 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:39 crc kubenswrapper[4676]: E0124 00:04:39.255227 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:39 crc kubenswrapper[4676]: E0124 00:04:39.255320 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.289133 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:54:34.31283418 +0000 UTC Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.334310 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.334432 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.334461 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.334492 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.334511 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:39Z","lastTransitionTime":"2026-01-24T00:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.437246 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.437299 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.437312 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.437329 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.437342 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:39Z","lastTransitionTime":"2026-01-24T00:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.540726 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.540782 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.540794 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.540818 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.540832 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:39Z","lastTransitionTime":"2026-01-24T00:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.643264 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.643316 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.643327 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.643344 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.643356 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:39Z","lastTransitionTime":"2026-01-24T00:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.745878 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.745935 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.745955 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.745986 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.746009 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:39Z","lastTransitionTime":"2026-01-24T00:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.848874 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.848944 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.848968 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.849026 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.849046 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:39Z","lastTransitionTime":"2026-01-24T00:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.951585 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.951639 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.951656 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.951679 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:39 crc kubenswrapper[4676]: I0124 00:04:39.951696 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:39Z","lastTransitionTime":"2026-01-24T00:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.054540 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.054630 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.054657 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.054688 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.054710 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:40Z","lastTransitionTime":"2026-01-24T00:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.157211 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.157267 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.157279 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.157300 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.157315 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:40Z","lastTransitionTime":"2026-01-24T00:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.255700 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.255783 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:40 crc kubenswrapper[4676]: E0124 00:04:40.255919 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:40 crc kubenswrapper[4676]: E0124 00:04:40.256068 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.260682 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.260718 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.260726 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.260740 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.260751 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:40Z","lastTransitionTime":"2026-01-24T00:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.289849 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 20:55:07.502831759 +0000 UTC Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.364466 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.364530 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.364542 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.364567 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.364580 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:40Z","lastTransitionTime":"2026-01-24T00:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.583272 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.583308 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.583316 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.583332 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.583341 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:40Z","lastTransitionTime":"2026-01-24T00:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.685258 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.685298 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.685308 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.685325 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.685338 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:40Z","lastTransitionTime":"2026-01-24T00:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.791599 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.791668 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.791690 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.791721 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.791742 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:40Z","lastTransitionTime":"2026-01-24T00:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.893794 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.893829 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.893837 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.893850 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.893859 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:40Z","lastTransitionTime":"2026-01-24T00:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.996053 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.996094 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.996105 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.996122 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:40 crc kubenswrapper[4676]: I0124 00:04:40.996158 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:40Z","lastTransitionTime":"2026-01-24T00:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.098337 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.098407 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.098424 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.098443 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.098459 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:41Z","lastTransitionTime":"2026-01-24T00:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.200861 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.200899 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.200908 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.200921 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.200931 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:41Z","lastTransitionTime":"2026-01-24T00:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.254739 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:41 crc kubenswrapper[4676]: E0124 00:04:41.254855 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.255032 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:41 crc kubenswrapper[4676]: E0124 00:04:41.255077 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.290356 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:35:56.043579597 +0000 UTC Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.305003 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.305037 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.305048 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.305068 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.305081 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:41Z","lastTransitionTime":"2026-01-24T00:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.407292 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.407329 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.407341 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.407355 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.407365 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:41Z","lastTransitionTime":"2026-01-24T00:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.510623 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.510665 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.510680 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.510701 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.510716 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:41Z","lastTransitionTime":"2026-01-24T00:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.612943 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.612968 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.612975 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.612988 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.612997 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:41Z","lastTransitionTime":"2026-01-24T00:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.715074 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.715114 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.715122 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.715137 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.715147 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:41Z","lastTransitionTime":"2026-01-24T00:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.817161 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.817196 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.817209 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.817224 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.817237 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:41Z","lastTransitionTime":"2026-01-24T00:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.919623 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.919683 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.919700 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.919725 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:41 crc kubenswrapper[4676]: I0124 00:04:41.919747 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:41Z","lastTransitionTime":"2026-01-24T00:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.022133 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.022170 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.022182 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.022201 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.022213 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:42Z","lastTransitionTime":"2026-01-24T00:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.124966 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.125013 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.125024 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.125043 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.125054 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:42Z","lastTransitionTime":"2026-01-24T00:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.227854 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.227892 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.227900 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.227915 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.227924 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:42Z","lastTransitionTime":"2026-01-24T00:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.255593 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.255659 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:42 crc kubenswrapper[4676]: E0124 00:04:42.255698 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:42 crc kubenswrapper[4676]: E0124 00:04:42.255856 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.291204 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:16:12.407153719 +0000 UTC Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.330340 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.330400 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.330410 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.330429 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.330440 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:42Z","lastTransitionTime":"2026-01-24T00:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.331136 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs\") pod \"network-metrics-daemon-r4q22\" (UID: \"18335446-e572-4741-ad9e-e7aadee7550b\") " pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:42 crc kubenswrapper[4676]: E0124 00:04:42.331338 4676 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:04:42 crc kubenswrapper[4676]: E0124 00:04:42.331463 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs podName:18335446-e572-4741-ad9e-e7aadee7550b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:14.331435898 +0000 UTC m=+98.361406939 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs") pod "network-metrics-daemon-r4q22" (UID: "18335446-e572-4741-ad9e-e7aadee7550b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.432518 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.432562 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.432571 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.432590 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.432602 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:42Z","lastTransitionTime":"2026-01-24T00:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.534148 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.534182 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.534190 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.534203 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.534212 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:42Z","lastTransitionTime":"2026-01-24T00:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.636462 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.636497 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.636506 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.636522 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.636531 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:42Z","lastTransitionTime":"2026-01-24T00:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.738872 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.738900 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.738909 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.738923 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.738932 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:42Z","lastTransitionTime":"2026-01-24T00:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.793067 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x57xf_b88e9d2e-35da-45a8-ac7e-22afd660ff9f/kube-multus/0.log" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.793115 4676 generic.go:334] "Generic (PLEG): container finished" podID="b88e9d2e-35da-45a8-ac7e-22afd660ff9f" containerID="db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c" exitCode=1 Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.793148 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x57xf" event={"ID":"b88e9d2e-35da-45a8-ac7e-22afd660ff9f","Type":"ContainerDied","Data":"db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c"} Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.793481 4676 scope.go:117] "RemoveContainer" containerID="db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.804506 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f036afbd-252a-4ed3-88e6-46256da87940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1880c78addfa5865cfdb73ac1d2965ff8142978ac0814615ea0d6ecb005f5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4342a165126bd52a03ab2a8ac09666d08d16d3b8034de7b6be1ef02506798c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7ee0b4dfd54ec0a33df18eba05dbd234ef0ed39fe66b05ee5d8254614955fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:42Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.819704 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:42Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.829720 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:42Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.840789 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:42Z\\\",\\\"message\\\":\\\"2026-01-24T00:03:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf\\\\n2026-01-24T00:03:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf to /host/opt/cni/bin/\\\\n2026-01-24T00:03:57Z [verbose] multus-daemon started\\\\n2026-01-24T00:03:57Z [verbose] Readiness Indicator file check\\\\n2026-01-24T00:04:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:42Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.841094 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.841122 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.841132 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.841148 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.841158 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:42Z","lastTransitionTime":"2026-01-24T00:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.854851 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:42Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.873369 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c4
5dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:42Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.884532 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:42Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.899388 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:42Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.911823 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:42Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.924238 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:42Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.942863 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:21Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:20.427464 6215 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 00:04:20.427560 6215 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 00:04:20.427590 6215 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0124 
00:04:20.427626 6215 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0124 00:04:20.427633 6215 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0124 00:04:20.427665 6215 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 00:04:20.427671 6215 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0124 00:04:20.427693 6215 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 00:04:20.427707 6215 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 00:04:20.427719 6215 handler.go:208] Removed *v1.Node event handler 2\\\\nI0124 00:04:20.427714 6215 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0124 00:04:20.427729 6215 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0124 00:04:20.427751 6215 factory.go:656] Stopping watch factory\\\\nI0124 00:04:20.427768 6215 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:20.427795 6215 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 00:04:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:42Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.944098 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.944140 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.944339 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.944360 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.944411 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:42Z","lastTransitionTime":"2026-01-24T00:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.953662 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a43f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:42Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.964838 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:42Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.978858 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:42Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:42 crc kubenswrapper[4676]: I0124 00:04:42.989524 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:42Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.011803 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.025240 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.040079 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.047467 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.047501 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.047511 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.047526 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.047536 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:43Z","lastTransitionTime":"2026-01-24T00:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.150691 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.150728 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.150737 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.150752 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.150761 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:43Z","lastTransitionTime":"2026-01-24T00:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.253427 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.253489 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.253506 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.253530 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.253547 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:43Z","lastTransitionTime":"2026-01-24T00:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.255555 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.255597 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:43 crc kubenswrapper[4676]: E0124 00:04:43.255645 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:43 crc kubenswrapper[4676]: E0124 00:04:43.255719 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.291761 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:06:34.02742757 +0000 UTC Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.355719 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.355760 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.355774 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.355792 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.355804 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:43Z","lastTransitionTime":"2026-01-24T00:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.457947 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.458007 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.458018 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.458033 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.458044 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:43Z","lastTransitionTime":"2026-01-24T00:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.559836 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.559898 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.559915 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.559937 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.559954 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:43Z","lastTransitionTime":"2026-01-24T00:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.662233 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.662275 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.662284 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.662300 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.662310 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:43Z","lastTransitionTime":"2026-01-24T00:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.764570 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.764610 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.764621 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.764640 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.764650 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:43Z","lastTransitionTime":"2026-01-24T00:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.799161 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x57xf_b88e9d2e-35da-45a8-ac7e-22afd660ff9f/kube-multus/0.log" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.799466 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x57xf" event={"ID":"b88e9d2e-35da-45a8-ac7e-22afd660ff9f","Type":"ContainerStarted","Data":"cf92889c765992ceabf09d2de008fbbbfc1dc097012d57ce03aafee751eb759b"} Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.813176 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.823810 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.848669 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:21Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:20.427464 6215 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 00:04:20.427560 6215 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 00:04:20.427590 6215 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0124 
00:04:20.427626 6215 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0124 00:04:20.427633 6215 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0124 00:04:20.427665 6215 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 00:04:20.427671 6215 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0124 00:04:20.427693 6215 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 00:04:20.427707 6215 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 00:04:20.427719 6215 handler.go:208] Removed *v1.Node event handler 2\\\\nI0124 00:04:20.427714 6215 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0124 00:04:20.427729 6215 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0124 00:04:20.427751 6215 factory.go:656] Stopping watch factory\\\\nI0124 00:04:20.427768 6215 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:20.427795 6215 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 00:04:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.862490 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a4
3f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.867090 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.867149 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.867166 4676 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.867189 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.867206 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:43Z","lastTransitionTime":"2026-01-24T00:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.876874 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b15
4edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is 
after 2025-08-24T17:21:41Z" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.891087 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.905694 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.918726 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.944324 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.957233 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.970573 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.970611 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.970625 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.970643 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.970656 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:43Z","lastTransitionTime":"2026-01-24T00:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.971663 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0828
7faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:43 crc kubenswrapper[4676]: I0124 00:04:43.984389 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f036afbd-252a-4ed3-88e6-46256da87940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1880c78addfa5865cfdb73ac1d2965ff8142978ac0814615ea0d6ecb005f5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4342a165126bd52a03ab2a8ac09666d08d16d3b8034de7b6be1ef02506798c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7ee0b4dfd54ec0a33df18eba05dbd234ef0ed39fe66b05ee5d8254614955fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.000161 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:43Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.018186 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:44Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.037243 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf92889c765992ceabf09d2de008fbbbfc1dc097012d57ce03aafee751eb759b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:42Z\\\",\\\"message\\\":\\\"2026-01-24T00:03:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf\\\\n2026-01-24T00:03:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf to /host/opt/cni/bin/\\\\n2026-01-24T00:03:57Z [verbose] multus-daemon started\\\\n2026-01-24T00:03:57Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T00:04:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:44Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.053600 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a
03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:44Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.070200 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c4
5dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:44Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.073073 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.073125 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.073144 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.073170 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.073189 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:44Z","lastTransitionTime":"2026-01-24T00:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.085543 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:44Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:44 crc 
kubenswrapper[4676]: I0124 00:04:44.176296 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.176642 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.176810 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.176960 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.177108 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:44Z","lastTransitionTime":"2026-01-24T00:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.254717 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:44 crc kubenswrapper[4676]: E0124 00:04:44.254901 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.255296 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:44 crc kubenswrapper[4676]: E0124 00:04:44.255858 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.292282 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:51:55.260539834 +0000 UTC Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.312921 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.312971 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.312984 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.313003 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.313017 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:44Z","lastTransitionTime":"2026-01-24T00:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.415626 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.415836 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.415895 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.415969 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.416029 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:44Z","lastTransitionTime":"2026-01-24T00:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.518367 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.518431 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.518443 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.518462 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.518474 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:44Z","lastTransitionTime":"2026-01-24T00:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.620684 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.620719 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.620730 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.620747 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.620758 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:44Z","lastTransitionTime":"2026-01-24T00:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.723398 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.723428 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.723436 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.723453 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.723460 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:44Z","lastTransitionTime":"2026-01-24T00:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.825103 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.825423 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.825512 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.825621 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.825703 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:44Z","lastTransitionTime":"2026-01-24T00:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.928478 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.928569 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.928582 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.928603 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:44 crc kubenswrapper[4676]: I0124 00:04:44.928617 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:44Z","lastTransitionTime":"2026-01-24T00:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.031441 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.031503 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.031522 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.031543 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.031557 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:45Z","lastTransitionTime":"2026-01-24T00:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.134213 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.134248 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.134258 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.134273 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.134282 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:45Z","lastTransitionTime":"2026-01-24T00:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.236501 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.236529 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.236539 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.236550 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.236558 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:45Z","lastTransitionTime":"2026-01-24T00:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.255264 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.255313 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:45 crc kubenswrapper[4676]: E0124 00:04:45.255443 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:45 crc kubenswrapper[4676]: E0124 00:04:45.255588 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.293785 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 01:23:31.889915031 +0000 UTC Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.338350 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.338376 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.338384 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.338397 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.338405 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:45Z","lastTransitionTime":"2026-01-24T00:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.440678 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.440707 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.440718 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.440733 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.440743 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:45Z","lastTransitionTime":"2026-01-24T00:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.543055 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.543097 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.543114 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.543137 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.543156 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:45Z","lastTransitionTime":"2026-01-24T00:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.645807 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.645840 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.645852 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.645870 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.645884 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:45Z","lastTransitionTime":"2026-01-24T00:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.748035 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.748063 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.748071 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.748086 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.748096 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:45Z","lastTransitionTime":"2026-01-24T00:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.850815 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.850875 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.850892 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.851143 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.851164 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:45Z","lastTransitionTime":"2026-01-24T00:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.954534 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.954677 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.954700 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.954729 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:45 crc kubenswrapper[4676]: I0124 00:04:45.954752 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:45Z","lastTransitionTime":"2026-01-24T00:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.057777 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.057816 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.057825 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.057840 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.057851 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:46Z","lastTransitionTime":"2026-01-24T00:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.159942 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.159985 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.159997 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.160018 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.160034 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:46Z","lastTransitionTime":"2026-01-24T00:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.255925 4676 scope.go:117] "RemoveContainer" containerID="a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.256356 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:46 crc kubenswrapper[4676]: E0124 00:04:46.256438 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.256610 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:46 crc kubenswrapper[4676]: E0124 00:04:46.256689 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.265982 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.266098 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.266172 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.266250 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.266316 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:46Z","lastTransitionTime":"2026-01-24T00:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.273824 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.294782 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:45:54.708886724 +0000 UTC Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.297022 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.316667 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.328095 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.356109 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.369131 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.369164 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.369174 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.369187 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.369197 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:46Z","lastTransitionTime":"2026-01-24T00:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.371008 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.382150 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.393851 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f036afbd-252a-4ed3-88e6-46256da87940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1880c78addfa5865cfdb73ac1d2965ff8142978ac0814615ea0d6ecb005f5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4342a165126bd52a03ab2a8ac09666d08d16d3b8034de7b6be1ef02506798c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7ee0b4dfd54ec0a33df18eba05dbd234ef0ed39fe66b05ee5d8254614955fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.407345 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.424043 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.438161 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf92889c765992ceabf09d2de008fbbbfc1dc097012d57ce03aafee751eb759b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:42Z\\\",\\\"message\\\":\\\"2026-01-24T00:03:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf\\\\n2026-01-24T00:03:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf to /host/opt/cni/bin/\\\\n2026-01-24T00:03:57Z [verbose] multus-daemon started\\\\n2026-01-24T00:03:57Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T00:04:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.451402 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a
03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.463880 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c4
5dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.470947 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.471123 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.471234 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.471348 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.471445 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:46Z","lastTransitionTime":"2026-01-24T00:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.478309 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc 
kubenswrapper[4676]: I0124 00:04:46.494583 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.508500 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.528981 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:21Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:20.427464 6215 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 00:04:20.427560 6215 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 00:04:20.427590 6215 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0124 00:04:20.427626 6215 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0124 00:04:20.427633 6215 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0124 00:04:20.427665 6215 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 00:04:20.427671 6215 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0124 00:04:20.427693 6215 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 00:04:20.427707 6215 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 00:04:20.427719 6215 handler.go:208] Removed *v1.Node event handler 2\\\\nI0124 00:04:20.427714 6215 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0124 00:04:20.427729 6215 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0124 00:04:20.427751 6215 factory.go:656] Stopping watch factory\\\\nI0124 00:04:20.427768 6215 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:20.427795 6215 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 00:04:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.540284 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a4
3f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.573360 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.573572 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.573658 4676 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.573720 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.573775 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:46Z","lastTransitionTime":"2026-01-24T00:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.676080 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.676327 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.676439 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.676524 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.676608 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:46Z","lastTransitionTime":"2026-01-24T00:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.778940 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.778983 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.778993 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.779009 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.779018 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:46Z","lastTransitionTime":"2026-01-24T00:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.810896 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/2.log" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.815506 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerStarted","Data":"dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9"} Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.816251 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.827003 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.843943 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.859192 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.870291 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.884742 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.884791 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.884803 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.884821 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.884839 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:46Z","lastTransitionTime":"2026-01-24T00:04:46Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.888796 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d2
2dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.905030 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.921750 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.936364 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f036afbd-252a-4ed3-88e6-46256da87940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1880c78addfa5865cfdb73ac1d2965ff8142978ac0814615ea0d6ecb005f5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4342a165126bd52a03ab2a8ac09666d08d16d3b8034de7b6be1ef02506798c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7ee0b4dfd54ec0a33df18eba05dbd234ef0ed39fe66b05ee5d8254614955fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.951224 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.961535 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.972543 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf92889c765992ceabf09d2de008fbbbfc1dc097012d57ce03aafee751eb759b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:42Z\\\",\\\"message\\\":\\\"2026-01-24T00:03:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf\\\\n2026-01-24T00:03:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf to /host/opt/cni/bin/\\\\n2026-01-24T00:03:57Z [verbose] multus-daemon started\\\\n2026-01-24T00:03:57Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T00:04:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.984315 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a
03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.986588 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.986620 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.986629 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:46 crc 
kubenswrapper[4676]: I0124 00:04:46.986643 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.986653 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:46Z","lastTransitionTime":"2026-01-24T00:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:46 crc kubenswrapper[4676]: I0124 00:04:46.997712 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00
:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 
00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:46Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.008174 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc 
kubenswrapper[4676]: I0124 00:04:47.019135 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.029319 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.046683 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:21Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:20.427464 6215 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 00:04:20.427560 6215 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 00:04:20.427590 6215 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0124 00:04:20.427626 6215 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0124 00:04:20.427633 6215 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0124 00:04:20.427665 6215 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 00:04:20.427671 6215 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0124 00:04:20.427693 6215 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 00:04:20.427707 6215 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 00:04:20.427719 6215 handler.go:208] Removed *v1.Node event handler 2\\\\nI0124 00:04:20.427714 6215 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0124 00:04:20.427729 6215 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0124 00:04:20.427751 6215 factory.go:656] Stopping watch factory\\\\nI0124 00:04:20.427768 6215 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:20.427795 6215 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 
00:04:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.058331 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a4
3f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.088604 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.088630 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.088638 4676 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.088650 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.088659 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:47Z","lastTransitionTime":"2026-01-24T00:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.190993 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.191047 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.191065 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.191088 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.191104 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:47Z","lastTransitionTime":"2026-01-24T00:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.254816 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.254816 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:47 crc kubenswrapper[4676]: E0124 00:04:47.254939 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:47 crc kubenswrapper[4676]: E0124 00:04:47.255015 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.293050 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.293084 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.293092 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.293106 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.293115 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:47Z","lastTransitionTime":"2026-01-24T00:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.295428 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:55:12.077624838 +0000 UTC Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.394627 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.394672 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.394683 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.394700 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.394711 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:47Z","lastTransitionTime":"2026-01-24T00:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.496798 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.496833 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.496841 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.496854 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.496864 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:47Z","lastTransitionTime":"2026-01-24T00:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.599530 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.599571 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.599579 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.599596 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.599606 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:47Z","lastTransitionTime":"2026-01-24T00:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.701066 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.701106 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.701114 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.701127 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.701136 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:47Z","lastTransitionTime":"2026-01-24T00:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.803655 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.803720 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.803738 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.803771 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.803802 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:47Z","lastTransitionTime":"2026-01-24T00:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.819877 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/3.log" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.820506 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/2.log" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.823858 4676 generic.go:334] "Generic (PLEG): container finished" podID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerID="dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9" exitCode=1 Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.823896 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerDied","Data":"dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9"} Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.823959 4676 scope.go:117] "RemoveContainer" containerID="a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.824563 4676 scope.go:117] "RemoveContainer" containerID="dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9" Jan 24 00:04:47 crc kubenswrapper[4676]: E0124 00:04:47.824734 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.840423 4676 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.852466 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.877004 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59fa26ff296d45c03abf7bef737940a6dadc3ec78d1b4e3b43b52a803530a76\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:21Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:20.427464 6215 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 00:04:20.427560 6215 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 00:04:20.427590 6215 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0124 00:04:20.427626 6215 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0124 00:04:20.427633 6215 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0124 00:04:20.427665 6215 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 00:04:20.427671 6215 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0124 00:04:20.427693 6215 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0124 00:04:20.427707 6215 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0124 00:04:20.427719 6215 handler.go:208] Removed *v1.Node event handler 2\\\\nI0124 00:04:20.427714 6215 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0124 00:04:20.427729 6215 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0124 00:04:20.427751 6215 factory.go:656] Stopping watch factory\\\\nI0124 00:04:20.427768 6215 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:20.427795 6215 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0124 00:04:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:47Z\\\",\\\"message\\\":\\\" reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:47.034512 6579 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:47.035026 6579 reflector.go:311] Stopping reflector *v1.Service 
(0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:47.035147 6579 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:47.035214 6579 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 00:04:47.035267 6579 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 00:04:47.040579 6579 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0124 00:04:47.040623 6579 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0124 00:04:47.040714 6579 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:47.040756 6579 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0124 00:04:47.040876 6579 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.891329 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a4
3f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.906077 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.906117 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.906127 4676 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.906143 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.906152 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:47Z","lastTransitionTime":"2026-01-24T00:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.919167 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f
3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.931990 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.942798 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.953253 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.965084 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.976633 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.987368 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf92889c765992ceabf09d2de008fbbbfc1dc097012d57ce03aafee751eb759b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:42Z\\\",\\\"message\\\":\\\"2026-01-24T00:03:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf\\\\n2026-01-24T00:03:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf to /host/opt/cni/bin/\\\\n2026-01-24T00:03:57Z [verbose] multus-daemon started\\\\n2026-01-24T00:03:57Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T00:04:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:47 crc kubenswrapper[4676]: I0124 00:04:47.997875 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a
03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:47Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.008091 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.008133 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.008144 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:48 crc 
kubenswrapper[4676]: I0124 00:04:48.008160 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.008171 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:48Z","lastTransitionTime":"2026-01-24T00:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.014151 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00
:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 
00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.030880 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.041274 4676 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f036afbd-252a-4ed3-88e6-46256da87940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1880c78addfa5865cfdb73ac1d2965ff8142978ac0814615ea0d6ecb005f5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4342a165126bd52a03ab2a8ac09666d08d16d3b8034de7b6be1ef02506798c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7ee0b4dfd54ec0a33df18eba05dbd234ef0ed39fe66b05ee5d8254614955fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.054350 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.069745 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.081509 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.110288 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.110315 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.110323 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.110336 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.110344 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:48Z","lastTransitionTime":"2026-01-24T00:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.213178 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.213245 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.213268 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.213295 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.213317 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:48Z","lastTransitionTime":"2026-01-24T00:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.254718 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.254718 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:48 crc kubenswrapper[4676]: E0124 00:04:48.254956 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:48 crc kubenswrapper[4676]: E0124 00:04:48.255076 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.295976 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:11:38.740752422 +0000 UTC Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.315669 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.315714 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.315728 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.315747 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.315761 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:48Z","lastTransitionTime":"2026-01-24T00:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.417313 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.417392 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.417405 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.417420 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.417431 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:48Z","lastTransitionTime":"2026-01-24T00:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.520050 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.520100 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.520112 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.520130 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.520145 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:48Z","lastTransitionTime":"2026-01-24T00:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.622482 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.622536 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.622551 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.622572 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.622587 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:48Z","lastTransitionTime":"2026-01-24T00:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.666220 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.666274 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.666284 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.666303 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.666317 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:48Z","lastTransitionTime":"2026-01-24T00:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:48 crc kubenswrapper[4676]: E0124 00:04:48.677630 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.680935 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.680955 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.680963 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.680977 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.680989 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:48Z","lastTransitionTime":"2026-01-24T00:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:48 crc kubenswrapper[4676]: E0124 00:04:48.694367 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.697935 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.697958 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.697968 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.697982 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.697992 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:48Z","lastTransitionTime":"2026-01-24T00:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:48 crc kubenswrapper[4676]: E0124 00:04:48.708645 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.712337 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.712382 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.712392 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.712424 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.712435 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:48Z","lastTransitionTime":"2026-01-24T00:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:48 crc kubenswrapper[4676]: E0124 00:04:48.724429 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.728305 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.728338 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.728348 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.728364 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.728376 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:48Z","lastTransitionTime":"2026-01-24T00:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:48 crc kubenswrapper[4676]: E0124 00:04:48.740310 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: E0124 00:04:48.740445 4676 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.741825 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.741850 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.741861 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.741872 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.741881 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:48Z","lastTransitionTime":"2026-01-24T00:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.828064 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/3.log" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.832003 4676 scope.go:117] "RemoveContainer" containerID="dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9" Jan 24 00:04:48 crc kubenswrapper[4676]: E0124 00:04:48.832266 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.843747 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.843769 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.843778 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.843792 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.843801 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:48Z","lastTransitionTime":"2026-01-24T00:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.847045 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f036afbd-252a-4ed3-88e6-46256da87940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1880c78addfa5865cfdb73ac1d2965ff8142978ac0814615ea0d6ecb005f5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://4342a165126bd52a03ab2a8ac09666d08d16d3b8034de7b6be1ef02506798c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7ee0b4dfd54ec0a33df18eba05dbd234ef0ed39fe66b05ee5d8254614955fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.864537 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.876835 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.890218 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf92889c765992ceabf09d2de008fbbbfc1dc097012d57ce03aafee751eb759b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:42Z\\\",\\\"message\\\":\\\"2026-01-24T00:03:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf\\\\n2026-01-24T00:03:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf to /host/opt/cni/bin/\\\\n2026-01-24T00:03:57Z [verbose] multus-daemon started\\\\n2026-01-24T00:03:57Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T00:04:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.906154 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a
03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.925073 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c4
5dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.939845 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.945973 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.946015 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.946042 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.946060 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.946070 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:48Z","lastTransitionTime":"2026-01-24T00:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.955136 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc 
kubenswrapper[4676]: I0124 00:04:48.971221 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:48 crc kubenswrapper[4676]: I0124 00:04:48.983181 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:48Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.004241 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:47Z\\\",\\\"message\\\":\\\" reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 
00:04:47.034512 6579 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:47.035026 6579 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:47.035147 6579 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:47.035214 6579 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 00:04:47.035267 6579 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 00:04:47.040579 6579 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0124 00:04:47.040623 6579 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0124 00:04:47.040714 6579 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:47.040756 6579 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0124 00:04:47.040876 6579 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:49Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.016541 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a4
3f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:49Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.032178 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:49Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.053909 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.053945 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.053963 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.053980 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.053992 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:49Z","lastTransitionTime":"2026-01-24T00:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.054200 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:49Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.071575 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:49Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.101156 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00
:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:49Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.115733 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:49Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.134865 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:49Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.156879 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.156920 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.156929 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.156945 4676 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.156955 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:49Z","lastTransitionTime":"2026-01-24T00:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.255574 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.255619 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:49 crc kubenswrapper[4676]: E0124 00:04:49.255714 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:49 crc kubenswrapper[4676]: E0124 00:04:49.255871 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.259039 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.259071 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.259082 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.259096 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.259107 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:49Z","lastTransitionTime":"2026-01-24T00:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.296147 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 07:42:39.89309252 +0000 UTC Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.372282 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.372315 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.372326 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.372344 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.372356 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:49Z","lastTransitionTime":"2026-01-24T00:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.475022 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.475100 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.475124 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.475157 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.475181 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:49Z","lastTransitionTime":"2026-01-24T00:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.578031 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.578066 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.578075 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.578091 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.578100 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:49Z","lastTransitionTime":"2026-01-24T00:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.680887 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.680936 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.680954 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.680977 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.680993 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:49Z","lastTransitionTime":"2026-01-24T00:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.783848 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.783904 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.783920 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.783945 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.783967 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:49Z","lastTransitionTime":"2026-01-24T00:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.886952 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.887019 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.887036 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.887064 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.887084 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:49Z","lastTransitionTime":"2026-01-24T00:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.990828 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.990888 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.990906 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.990931 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:49 crc kubenswrapper[4676]: I0124 00:04:49.990948 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:49Z","lastTransitionTime":"2026-01-24T00:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.093293 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.093336 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.093348 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.093366 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.093402 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:50Z","lastTransitionTime":"2026-01-24T00:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.195874 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.195934 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.195951 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.195979 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.195996 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:50Z","lastTransitionTime":"2026-01-24T00:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.254824 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.254835 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:50 crc kubenswrapper[4676]: E0124 00:04:50.254938 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:50 crc kubenswrapper[4676]: E0124 00:04:50.255075 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.296964 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 15:58:47.965914184 +0000 UTC Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.299670 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.299727 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.299744 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.299768 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.299785 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:50Z","lastTransitionTime":"2026-01-24T00:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.402178 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.402224 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.402235 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.402250 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.402260 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:50Z","lastTransitionTime":"2026-01-24T00:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.505169 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.505207 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.505217 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.505233 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.505244 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:50Z","lastTransitionTime":"2026-01-24T00:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.608811 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.608869 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.608890 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.608918 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.608939 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:50Z","lastTransitionTime":"2026-01-24T00:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.712613 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.712683 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.712704 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.712736 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.712761 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:50Z","lastTransitionTime":"2026-01-24T00:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.820450 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.820504 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.820523 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.820549 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.820567 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:50Z","lastTransitionTime":"2026-01-24T00:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.922825 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.922866 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.922877 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.922892 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:50 crc kubenswrapper[4676]: I0124 00:04:50.922903 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:50Z","lastTransitionTime":"2026-01-24T00:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.025537 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.025595 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.025612 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.025636 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.025653 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:51Z","lastTransitionTime":"2026-01-24T00:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.127701 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.127746 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.127757 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.127774 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.127788 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:51Z","lastTransitionTime":"2026-01-24T00:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.230750 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.230815 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.230833 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.230859 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.230876 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:51Z","lastTransitionTime":"2026-01-24T00:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.255459 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:51 crc kubenswrapper[4676]: E0124 00:04:51.255623 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.255468 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:51 crc kubenswrapper[4676]: E0124 00:04:51.255878 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.297609 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 09:49:26.616841929 +0000 UTC Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.334493 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.334564 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.334586 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.334617 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.334636 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:51Z","lastTransitionTime":"2026-01-24T00:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.437064 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.437493 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.437519 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.437542 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.437559 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:51Z","lastTransitionTime":"2026-01-24T00:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.540529 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.540565 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.540576 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.540590 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.540619 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:51Z","lastTransitionTime":"2026-01-24T00:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.643599 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.643938 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.644092 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.644245 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.644425 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:51Z","lastTransitionTime":"2026-01-24T00:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.747876 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.747932 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.747943 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.747961 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.747973 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:51Z","lastTransitionTime":"2026-01-24T00:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.850706 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.851091 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.851171 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.851266 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.851356 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:51Z","lastTransitionTime":"2026-01-24T00:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.954533 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.954917 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.954996 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.955068 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:51 crc kubenswrapper[4676]: I0124 00:04:51.955142 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:51Z","lastTransitionTime":"2026-01-24T00:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.058792 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.058857 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.058877 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.058905 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.058924 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:52Z","lastTransitionTime":"2026-01-24T00:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.162941 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.163324 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.163412 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.163510 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.163613 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:52Z","lastTransitionTime":"2026-01-24T00:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.255616 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.255729 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:52 crc kubenswrapper[4676]: E0124 00:04:52.256495 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:52 crc kubenswrapper[4676]: E0124 00:04:52.256707 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.265519 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.265762 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.265901 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.266050 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.266338 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:52Z","lastTransitionTime":"2026-01-24T00:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.297827 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 00:46:17.728723874 +0000 UTC Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.369756 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.369796 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.369807 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.369824 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.369835 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:52Z","lastTransitionTime":"2026-01-24T00:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.473020 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.473074 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.473086 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.473106 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.473119 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:52Z","lastTransitionTime":"2026-01-24T00:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.577657 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.577726 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.577746 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.577772 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.577791 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:52Z","lastTransitionTime":"2026-01-24T00:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.680504 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.681004 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.681201 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.681334 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.681513 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:52Z","lastTransitionTime":"2026-01-24T00:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.784684 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.784742 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.784759 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.784786 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.784804 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:52Z","lastTransitionTime":"2026-01-24T00:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.887693 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.887762 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.887779 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.887804 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.887822 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:52Z","lastTransitionTime":"2026-01-24T00:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.990557 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.991021 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.991334 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.991669 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:52 crc kubenswrapper[4676]: I0124 00:04:52.991811 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:52Z","lastTransitionTime":"2026-01-24T00:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.094630 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.094901 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.095048 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.095192 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.095308 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:53Z","lastTransitionTime":"2026-01-24T00:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.198133 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.198442 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.198588 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.198733 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.198873 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:53Z","lastTransitionTime":"2026-01-24T00:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.255729 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:53 crc kubenswrapper[4676]: E0124 00:04:53.255903 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.255749 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:53 crc kubenswrapper[4676]: E0124 00:04:53.256343 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.298819 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:02:20.73729298 +0000 UTC Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.301625 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.301820 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.302217 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.302457 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.302890 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:53Z","lastTransitionTime":"2026-01-24T00:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.405999 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.406249 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.406475 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.406658 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.406799 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:53Z","lastTransitionTime":"2026-01-24T00:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.508752 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.509430 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.509906 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.510407 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.510595 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:53Z","lastTransitionTime":"2026-01-24T00:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.613309 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.613394 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.613414 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.613445 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.613469 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:53Z","lastTransitionTime":"2026-01-24T00:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.715952 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.716002 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.716011 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.716025 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.716034 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:53Z","lastTransitionTime":"2026-01-24T00:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.819066 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.819124 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.819141 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.819165 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.819182 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:53Z","lastTransitionTime":"2026-01-24T00:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.922578 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.922648 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.922669 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.922698 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:53 crc kubenswrapper[4676]: I0124 00:04:53.923149 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:53Z","lastTransitionTime":"2026-01-24T00:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.027087 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.027135 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.027146 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.027165 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.027176 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:54Z","lastTransitionTime":"2026-01-24T00:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.130267 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.130399 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.130420 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.130445 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.130463 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:54Z","lastTransitionTime":"2026-01-24T00:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.233237 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.233326 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.233343 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.233368 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.233417 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:54Z","lastTransitionTime":"2026-01-24T00:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.255083 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:54 crc kubenswrapper[4676]: E0124 00:04:54.255330 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.255086 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:54 crc kubenswrapper[4676]: E0124 00:04:54.255663 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.299284 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 10:08:47.956276261 +0000 UTC Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.335827 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.335901 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.335919 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.335945 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.335963 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:54Z","lastTransitionTime":"2026-01-24T00:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.439634 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.439682 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.439980 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.440006 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.440023 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:54Z","lastTransitionTime":"2026-01-24T00:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.543333 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.543461 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.543488 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.543517 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.543538 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:54Z","lastTransitionTime":"2026-01-24T00:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.646465 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.646531 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.646549 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.646573 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.646590 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:54Z","lastTransitionTime":"2026-01-24T00:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.749742 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.749800 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.749816 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.749839 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.749856 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:54Z","lastTransitionTime":"2026-01-24T00:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.852336 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.852433 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.852454 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.852479 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.852497 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:54Z","lastTransitionTime":"2026-01-24T00:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.957324 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.957444 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.957470 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.957500 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:54 crc kubenswrapper[4676]: I0124 00:04:54.957522 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:54Z","lastTransitionTime":"2026-01-24T00:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.061029 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.061082 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.061098 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.061119 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.061135 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:55Z","lastTransitionTime":"2026-01-24T00:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.164095 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.164141 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.164159 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.164182 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.164199 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:55Z","lastTransitionTime":"2026-01-24T00:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.254959 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.254974 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:55 crc kubenswrapper[4676]: E0124 00:04:55.255179 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:55 crc kubenswrapper[4676]: E0124 00:04:55.255259 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.267299 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.267368 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.267437 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.267477 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.267501 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:55Z","lastTransitionTime":"2026-01-24T00:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.300014 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:38:12.187481517 +0000 UTC Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.370242 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.370284 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.370297 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.370314 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.370325 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:55Z","lastTransitionTime":"2026-01-24T00:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.473279 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.473338 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.473356 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.473378 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.473422 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:55Z","lastTransitionTime":"2026-01-24T00:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.575770 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.575819 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.575831 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.575846 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.575856 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:55Z","lastTransitionTime":"2026-01-24T00:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.677740 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.677779 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.677789 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.677805 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.677817 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:55Z","lastTransitionTime":"2026-01-24T00:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.780127 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.780172 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.780190 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.780213 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.780231 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:55Z","lastTransitionTime":"2026-01-24T00:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.883170 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.883301 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.883328 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.883356 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.883383 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:55Z","lastTransitionTime":"2026-01-24T00:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.985857 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.985900 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.985911 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.985929 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:55 crc kubenswrapper[4676]: I0124 00:04:55.985942 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:55Z","lastTransitionTime":"2026-01-24T00:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.089454 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.089502 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.089519 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.089541 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.089557 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:56Z","lastTransitionTime":"2026-01-24T00:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.192454 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.192514 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.192594 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.192627 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.192649 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:56Z","lastTransitionTime":"2026-01-24T00:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.259191 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:56 crc kubenswrapper[4676]: E0124 00:04:56.260077 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.259696 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:56 crc kubenswrapper[4676]: E0124 00:04:56.260450 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.281984 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.298133 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.298398 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.298503 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:56 crc 
kubenswrapper[4676]: I0124 00:04:56.298588 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.298675 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:56Z","lastTransitionTime":"2026-01-24T00:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.300492 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:26:26.330485113 +0000 UTC Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.303758 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.328810 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf92889c765992ceabf09d2de008fbbbfc1dc097012d57ce03aafee751eb759b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:42Z\\\",\\\"message\\\":\\\"2026-01-24T00:03:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf\\\\n2026-01-24T00:03:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf to /host/opt/cni/bin/\\\\n2026-01-24T00:03:57Z [verbose] multus-daemon started\\\\n2026-01-24T00:03:57Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T00:04:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.352919 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a
03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.373800 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c4
5dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.389990 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.401634 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.401667 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.401679 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.401693 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.401797 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:56Z","lastTransitionTime":"2026-01-24T00:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.401759 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f036afbd-252a-4ed3-88e6-46256da87940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1880c78addfa5865cfdb73ac1d2965ff8142978ac0814615ea0d6ecb005f5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://4342a165126bd52a03ab2a8ac09666d08d16d3b8034de7b6be1ef02506798c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7ee0b4dfd54ec0a33df18eba05dbd234ef0ed39fe66b05ee5d8254614955fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.410653 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc 
kubenswrapper[4676]: I0124 00:04:56.420077 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.428653 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.449135 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:47Z\\\",\\\"message\\\":\\\" reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 
00:04:47.034512 6579 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:47.035026 6579 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:47.035147 6579 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:47.035214 6579 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 00:04:47.035267 6579 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 00:04:47.040579 6579 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0124 00:04:47.040623 6579 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0124 00:04:47.040714 6579 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:47.040756 6579 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0124 00:04:47.040876 6579 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.460071 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a4
3f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.475952 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e
7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.487907 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.504313 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.504395 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.504408 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.504424 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.504435 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:56Z","lastTransitionTime":"2026-01-24T00:04:56Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.507602 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d2
2dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.522664 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.534241 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.548920 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:56Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.608345 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.608464 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.608493 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.608525 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.608550 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:56Z","lastTransitionTime":"2026-01-24T00:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.712159 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.712192 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.712205 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.712221 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.712232 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:56Z","lastTransitionTime":"2026-01-24T00:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.840045 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.840083 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.840095 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.840111 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.840122 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:56Z","lastTransitionTime":"2026-01-24T00:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.942258 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.942295 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.942305 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.942322 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:56 crc kubenswrapper[4676]: I0124 00:04:56.942334 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:56Z","lastTransitionTime":"2026-01-24T00:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.045373 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.045452 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.045469 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.045492 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.045508 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:57Z","lastTransitionTime":"2026-01-24T00:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.148569 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.148635 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.148656 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.148685 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.148739 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:57Z","lastTransitionTime":"2026-01-24T00:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.253054 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.253086 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.253095 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.253109 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.253118 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:57Z","lastTransitionTime":"2026-01-24T00:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.255509 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.255583 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:57 crc kubenswrapper[4676]: E0124 00:04:57.255622 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:57 crc kubenswrapper[4676]: E0124 00:04:57.255731 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.301217 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:09:25.171810413 +0000 UTC Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.356076 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.356435 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.356660 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.356854 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.357044 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:57Z","lastTransitionTime":"2026-01-24T00:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.460458 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.460500 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.460517 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.460539 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.460555 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:57Z","lastTransitionTime":"2026-01-24T00:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.563133 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.563206 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.563229 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.563257 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.563278 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:57Z","lastTransitionTime":"2026-01-24T00:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.666220 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.666284 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.666302 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.666330 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.666346 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:57Z","lastTransitionTime":"2026-01-24T00:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.768752 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.768808 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.768830 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.768858 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.768881 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:57Z","lastTransitionTime":"2026-01-24T00:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.871527 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.871587 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.871609 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.871636 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.871658 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:57Z","lastTransitionTime":"2026-01-24T00:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.974497 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.974558 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.974576 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.974600 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:57 crc kubenswrapper[4676]: I0124 00:04:57.974619 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:57Z","lastTransitionTime":"2026-01-24T00:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.077164 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.077801 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.078005 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.078107 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.078197 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:58Z","lastTransitionTime":"2026-01-24T00:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.180219 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.180259 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.180269 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.180285 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.180295 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:58Z","lastTransitionTime":"2026-01-24T00:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.254768 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:58 crc kubenswrapper[4676]: E0124 00:04:58.254983 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.254782 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:04:58 crc kubenswrapper[4676]: E0124 00:04:58.255365 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.283069 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.283119 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.283136 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.283161 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.283177 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:58Z","lastTransitionTime":"2026-01-24T00:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.301589 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:18:23.640100867 +0000 UTC Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.386577 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.386602 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.386609 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.386623 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.386631 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:58Z","lastTransitionTime":"2026-01-24T00:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.489819 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.490222 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.490445 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.490674 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.490874 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:58Z","lastTransitionTime":"2026-01-24T00:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.593653 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.594042 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.594227 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.594485 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.594688 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:58Z","lastTransitionTime":"2026-01-24T00:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.697631 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.698068 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.698223 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.698372 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.698564 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:58Z","lastTransitionTime":"2026-01-24T00:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.802028 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.802101 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.802131 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.802162 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.802184 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:58Z","lastTransitionTime":"2026-01-24T00:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.866266 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.866322 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.866339 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.866362 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.866411 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:58Z","lastTransitionTime":"2026-01-24T00:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:58 crc kubenswrapper[4676]: E0124 00:04:58.887830 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.893355 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.893437 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.893455 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.893479 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.893497 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:58Z","lastTransitionTime":"2026-01-24T00:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:58 crc kubenswrapper[4676]: E0124 00:04:58.916271 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.921669 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.921720 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.921738 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.921762 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.921781 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:58Z","lastTransitionTime":"2026-01-24T00:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:58 crc kubenswrapper[4676]: E0124 00:04:58.942416 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.954615 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.954678 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.954697 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.954723 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.954741 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:58Z","lastTransitionTime":"2026-01-24T00:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:58 crc kubenswrapper[4676]: E0124 00:04:58.976479 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.981749 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.981801 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.981819 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.981848 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:58 crc kubenswrapper[4676]: I0124 00:04:58.981867 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:58Z","lastTransitionTime":"2026-01-24T00:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.001776 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:04:58Z is after 2025-08-24T17:21:41Z" Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.002107 4676 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.004435 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.004495 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.004512 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.004537 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.004553 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:59Z","lastTransitionTime":"2026-01-24T00:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.107218 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.107268 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.107293 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.107323 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.107345 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:59Z","lastTransitionTime":"2026-01-24T00:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.210819 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.210885 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.210908 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.210934 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.210954 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:59Z","lastTransitionTime":"2026-01-24T00:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.221348 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.221523 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 00:06:03.221494725 +0000 UTC m=+147.251465756 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.254666 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.254816 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.255054 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.255149 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.271050 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.302625 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:42:36.806324763 +0000 UTC Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.314859 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.314914 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.314938 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.314967 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.314989 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:59Z","lastTransitionTime":"2026-01-24T00:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.322825 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.322889 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.323001 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.323056 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.323088 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.323107 4676 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.323168 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 00:06:03.32314604 +0000 UTC m=+147.353117081 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.323170 4676 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.323218 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 00:06:03.323206302 +0000 UTC m=+147.353177343 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.323052 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.323279 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.323308 4676 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.323330 4676 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.323437 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-24 00:06:03.323412968 +0000 UTC m=+147.353384000 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.323516 4676 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:04:59 crc kubenswrapper[4676]: E0124 00:04:59.323578 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 00:06:03.323558903 +0000 UTC m=+147.353529944 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.418243 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.418298 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.418317 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.418344 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.418361 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:59Z","lastTransitionTime":"2026-01-24T00:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.522136 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.522193 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.522209 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.522233 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.522250 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:59Z","lastTransitionTime":"2026-01-24T00:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.624989 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.625080 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.625098 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.625120 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.625138 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:59Z","lastTransitionTime":"2026-01-24T00:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.727646 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.727709 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.727728 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.727754 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.727777 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:59Z","lastTransitionTime":"2026-01-24T00:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.831111 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.831146 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.831157 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.831174 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.831187 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:59Z","lastTransitionTime":"2026-01-24T00:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.934519 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.934801 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.934825 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.934857 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:04:59 crc kubenswrapper[4676]: I0124 00:04:59.934879 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:04:59Z","lastTransitionTime":"2026-01-24T00:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.038230 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.038288 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.038305 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.038327 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.038342 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:00Z","lastTransitionTime":"2026-01-24T00:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.142724 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.142775 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.142792 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.142821 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.142838 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:00Z","lastTransitionTime":"2026-01-24T00:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.245822 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.245867 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.245881 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.245901 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.245914 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:00Z","lastTransitionTime":"2026-01-24T00:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.255672 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.255720 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:00 crc kubenswrapper[4676]: E0124 00:05:00.256112 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:00 crc kubenswrapper[4676]: E0124 00:05:00.256319 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.304429 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:42:23.399296239 +0000 UTC Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.349416 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.349503 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.349525 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.349553 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.349574 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:00Z","lastTransitionTime":"2026-01-24T00:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.453194 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.453255 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.453273 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.453296 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.453315 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:00Z","lastTransitionTime":"2026-01-24T00:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.555612 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.555708 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.555733 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.555761 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.555784 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:00Z","lastTransitionTime":"2026-01-24T00:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.659018 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.659072 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.659090 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.659117 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.659134 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:00Z","lastTransitionTime":"2026-01-24T00:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.761951 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.762027 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.762045 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.762070 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.762086 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:00Z","lastTransitionTime":"2026-01-24T00:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.864847 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.864915 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.864928 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.864943 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.864954 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:00Z","lastTransitionTime":"2026-01-24T00:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.967286 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.967362 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.967414 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.967444 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:00 crc kubenswrapper[4676]: I0124 00:05:00.967462 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:00Z","lastTransitionTime":"2026-01-24T00:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.070118 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.070150 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.070158 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.070184 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.070193 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:01Z","lastTransitionTime":"2026-01-24T00:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.173446 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.173505 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.173524 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.173548 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.173565 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:01Z","lastTransitionTime":"2026-01-24T00:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.255863 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:01 crc kubenswrapper[4676]: E0124 00:05:01.256120 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.257213 4676 scope.go:117] "RemoveContainer" containerID="dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9" Jan 24 00:05:01 crc kubenswrapper[4676]: E0124 00:05:01.257555 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.257631 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:01 crc kubenswrapper[4676]: E0124 00:05:01.257744 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.277719 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.277768 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.277786 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.277810 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.277828 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:01Z","lastTransitionTime":"2026-01-24T00:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.305262 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:28:02.044116927 +0000 UTC Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.380676 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.380745 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.380763 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.380787 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.380805 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:01Z","lastTransitionTime":"2026-01-24T00:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.483273 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.483336 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.483353 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.483404 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.483424 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:01Z","lastTransitionTime":"2026-01-24T00:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.586106 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.586221 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.586245 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.586276 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.586295 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:01Z","lastTransitionTime":"2026-01-24T00:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.689464 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.689519 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.689536 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.689558 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.689577 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:01Z","lastTransitionTime":"2026-01-24T00:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.792220 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.792273 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.792290 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.792313 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.792329 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:01Z","lastTransitionTime":"2026-01-24T00:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.894570 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.894615 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.894631 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.894652 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.894670 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:01Z","lastTransitionTime":"2026-01-24T00:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.997955 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.998018 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.998040 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.998068 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:01 crc kubenswrapper[4676]: I0124 00:05:01.998091 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:01Z","lastTransitionTime":"2026-01-24T00:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.101096 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.101166 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.101191 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.101222 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.101239 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:02Z","lastTransitionTime":"2026-01-24T00:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.204084 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.204170 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.204195 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.204223 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.204246 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:02Z","lastTransitionTime":"2026-01-24T00:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.255467 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.255508 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:02 crc kubenswrapper[4676]: E0124 00:05:02.255840 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:02 crc kubenswrapper[4676]: E0124 00:05:02.255952 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.305407 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:50:39.664224188 +0000 UTC Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.306983 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.307034 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.307053 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.307074 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.307089 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:02Z","lastTransitionTime":"2026-01-24T00:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.410317 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.410406 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.410432 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.410462 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.410485 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:02Z","lastTransitionTime":"2026-01-24T00:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.513702 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.513772 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.513786 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.513810 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.513825 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:02Z","lastTransitionTime":"2026-01-24T00:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.616975 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.617022 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.617037 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.617059 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.617075 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:02Z","lastTransitionTime":"2026-01-24T00:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.719594 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.719648 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.719678 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.719704 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.719720 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:02Z","lastTransitionTime":"2026-01-24T00:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.823018 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.823185 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.823212 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.823243 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.823264 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:02Z","lastTransitionTime":"2026-01-24T00:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.925757 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.925817 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.925835 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.925866 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:02 crc kubenswrapper[4676]: I0124 00:05:02.925884 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:02Z","lastTransitionTime":"2026-01-24T00:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.030121 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.030197 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.030217 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.030251 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.030273 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:03Z","lastTransitionTime":"2026-01-24T00:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.132986 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.133051 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.133070 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.133094 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.133112 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:03Z","lastTransitionTime":"2026-01-24T00:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.236202 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.236338 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.236367 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.236439 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.236466 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:03Z","lastTransitionTime":"2026-01-24T00:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.255319 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.255436 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:03 crc kubenswrapper[4676]: E0124 00:05:03.255527 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:03 crc kubenswrapper[4676]: E0124 00:05:03.255643 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.306078 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:39:25.845784725 +0000 UTC Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.340368 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.340455 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.340494 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.340526 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.340548 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:03Z","lastTransitionTime":"2026-01-24T00:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.443038 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.443108 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.443137 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.443167 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.443191 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:03Z","lastTransitionTime":"2026-01-24T00:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.545629 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.545759 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.545784 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.545814 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.545840 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:03Z","lastTransitionTime":"2026-01-24T00:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.648577 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.648628 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.648645 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.648666 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.648683 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:03Z","lastTransitionTime":"2026-01-24T00:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.751561 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.751592 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.751601 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.751616 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.751627 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:03Z","lastTransitionTime":"2026-01-24T00:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.854242 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.854408 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.854440 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.854500 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.854519 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:03Z","lastTransitionTime":"2026-01-24T00:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.957636 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.957685 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.957697 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.957713 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:03 crc kubenswrapper[4676]: I0124 00:05:03.957724 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:03Z","lastTransitionTime":"2026-01-24T00:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.060552 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.060608 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.060621 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.060641 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.060654 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:04Z","lastTransitionTime":"2026-01-24T00:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.163615 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.163701 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.163723 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.163753 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.163778 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:04Z","lastTransitionTime":"2026-01-24T00:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.255909 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.256162 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:04 crc kubenswrapper[4676]: E0124 00:05:04.256403 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:04 crc kubenswrapper[4676]: E0124 00:05:04.256563 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.266517 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.266667 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.266692 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.266718 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.266736 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:04Z","lastTransitionTime":"2026-01-24T00:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.307149 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 08:21:48.503218661 +0000 UTC Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.369663 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.369764 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.369798 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.369839 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.369866 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:04Z","lastTransitionTime":"2026-01-24T00:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.472911 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.472958 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.472975 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.472998 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.473017 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:04Z","lastTransitionTime":"2026-01-24T00:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.576988 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.577050 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.577062 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.577085 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.577100 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:04Z","lastTransitionTime":"2026-01-24T00:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.679203 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.679247 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.679257 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.679277 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.679287 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:04Z","lastTransitionTime":"2026-01-24T00:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.782905 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.783006 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.783074 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.783107 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.783127 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:04Z","lastTransitionTime":"2026-01-24T00:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.885789 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.885854 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.885871 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.885898 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.885916 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:04Z","lastTransitionTime":"2026-01-24T00:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.988063 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.988101 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.988113 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.988128 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:04 crc kubenswrapper[4676]: I0124 00:05:04.988138 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:04Z","lastTransitionTime":"2026-01-24T00:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.091791 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.091859 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.091883 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.091912 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.091931 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:05Z","lastTransitionTime":"2026-01-24T00:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.194177 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.194257 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.194281 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.194314 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.194337 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:05Z","lastTransitionTime":"2026-01-24T00:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.254969 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:05 crc kubenswrapper[4676]: E0124 00:05:05.255143 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.255203 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:05 crc kubenswrapper[4676]: E0124 00:05:05.255711 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.296646 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.296705 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.296723 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.296749 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.296768 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:05Z","lastTransitionTime":"2026-01-24T00:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.308112 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:12:05.688953167 +0000 UTC Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.399488 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.399556 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.399577 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.399603 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.399622 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:05Z","lastTransitionTime":"2026-01-24T00:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.502197 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.502248 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.502265 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.502288 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.502309 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:05Z","lastTransitionTime":"2026-01-24T00:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.604784 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.604829 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.604846 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.604868 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.604885 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:05Z","lastTransitionTime":"2026-01-24T00:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.707530 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.707626 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.707645 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.707668 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.707685 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:05Z","lastTransitionTime":"2026-01-24T00:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.810043 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.810094 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.810113 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.810135 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.810152 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:05Z","lastTransitionTime":"2026-01-24T00:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.913595 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.913646 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.913662 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.913686 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:05 crc kubenswrapper[4676]: I0124 00:05:05.913705 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:05Z","lastTransitionTime":"2026-01-24T00:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.016593 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.016653 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.016671 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.016695 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.016715 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:06Z","lastTransitionTime":"2026-01-24T00:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.120487 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.120552 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.120571 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.120599 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.120616 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:06Z","lastTransitionTime":"2026-01-24T00:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.223542 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.223629 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.223648 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.223671 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.223687 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:06Z","lastTransitionTime":"2026-01-24T00:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.255279 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:06 crc kubenswrapper[4676]: E0124 00:05:06.255468 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.255709 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:06 crc kubenswrapper[4676]: E0124 00:05:06.255826 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.279735 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:47Z\\\",\\\"message\\\":\\\" reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:47.034512 6579 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:47.035026 6579 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:47.035147 6579 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0124 00:04:47.035214 6579 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 00:04:47.035267 6579 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 00:04:47.040579 6579 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0124 00:04:47.040623 6579 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0124 00:04:47.040714 6579 ovnkube.go:599] Stopped ovnkube\\\\nI0124 00:04:47.040756 6579 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0124 00:04:47.040876 6579 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59614eb0966d46742
2d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4frqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ld569\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.298080 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40151406-46c7-4668-8b2b-db0585847be9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d935e1a0b95e7b7bdbd9c5299727d3f056f62ab78b0062468dac8a66196e023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7097d31bd127d1e68680dfec923eecc06e9a4
3f0cf00153752e237b0c013d39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7m8ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.309263 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 18:58:54.06456004 +0000 UTC Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.318164 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.326733 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.326775 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.326791 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.326814 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.326831 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:06Z","lastTransitionTime":"2026-01-24T00:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.341094 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ad333b-cf18-4ba3-b9d4-2f89c7c44354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b6909340c11df3cad3b601def65f1a29ff042dd39375eb985c8c8e29442cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fd71005d4fd7fe142e31233e3e9aef36b280e5f7531c46df616bba8ef261cbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec4e1df33b002745ef1956312b8f1e0ef6b041fa7c8641cc718d26ffc545ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6588cf5ad649e704406a3c7d0d036d9913a13d0bdd14f726c7d3026997c4ace5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb6e7cff762c845ff9c43e0b072eba7c653b02703f8c1f3a564000822e5af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5afe4e2621897256e5a3c88f4d62db565155cf11abdf0c0de27b25ae80c03d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e578c3d1b3899861d7c1a717a90666abc21e6fc257e211f31c41cd2b966f239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:04:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhp4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ppmcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.354337 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4bcxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc086f6b-af67-49e4-97c8-f8b70f19e49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10de997ad103241d497848f2236116ef34a903e35825e3d55f4a587c040a577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzmhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4bcxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.400294 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fcc1e5b-d0aa-4b28-ab66-3bbdc465b321\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e954f4e5a3227b78bdbcf9adfd78ae7587f4edcd2d7eba76da5dcf3e8a0c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://244c5e1c2187811c5b1b53000c2ac14fb97aaa7e1479925bebbe1a5aa154831f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://687f0acf340276893b88aed014fefe90fd67168d72bd0e19af64840356261e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00
:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2de4a5c6ab353cee1b62449520d5300bc915036e53ab7a40be7c8f80e2264753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4119ad5bdb9f5da977cc828824b870ee85b49100185d693c72c91d4b5f8d0b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89555cc4f831d0a25b05003527780e3a2d285fdda064190c2a50afcb1bccbdd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89aa3db6cf48560fcfbbff3e2de953d0d1fe65f6ae638ecc10bc8251cb445fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd1e669a65eace389a63d22dfc020461db7f58ad0e9f3e51f618d930d762ace9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.416767 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a1030649d4c0733154c0864f97c0b26d129607d34d94a65ba69a9084f04d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.429849 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.429882 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.429894 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.429911 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.429923 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:06Z","lastTransitionTime":"2026-01-24T00:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.438555 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e68d183ad891d28a3bac2ae2e2b2f878b3c1a708d657cc7e3111626157ae90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f28a03a7763ed7498fc7272f1ed4ab6be13b2ff941bc606b386a1cf7568d0a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.458943 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f036afbd-252a-4ed3-88e6-46256da87940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1880c78addfa5865cfdb73ac1d2965ff8142978ac0814615ea0d6ecb005f5847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4342a165126bd52a03ab2a8ac09666d08d16d3b8034de7b6be1ef02506798c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7ee0b4dfd54ec0a33df18eba05dbd234ef0ed39fe66b05ee5d8254614955fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8c5cccdbbac0a338bd0c05acb0ffb20179ca7413df27cc499ab2fbfc9451d51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.473247 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.490102 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.507955 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x57xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88e9d2e-35da-45a8-ac7e-22afd660ff9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf92889c765992ceabf09d2de008fbbbfc1dc097012d57ce03aafee751eb759b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T00:04:42Z\\\",\\\"message\\\":\\\"2026-01-24T00:03:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf\\\\n2026-01-24T00:03:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c1d5c5ee-6cad-41d4-9283-93b019ff77bf to /host/opt/cni/bin/\\\\n2026-01-24T00:03:57Z [verbose] multus-daemon started\\\\n2026-01-24T00:03:57Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T00:04:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67bbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x57xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.524607 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd647b0d-6d3d-432d-81ac-6484a2948211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10947bd839858a55b7b098d2a83f3539d2000c9e32bef961d1e3b418516afbbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a
03752ec7f4fd03195945544d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9vrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mzrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.532660 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.532756 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.532805 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:06 crc 
kubenswrapper[4676]: I0124 00:05:06.532836 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.532854 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:06Z","lastTransitionTime":"2026-01-24T00:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.539794 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad2d146-bbd6-4720-8eec-7644ef2c0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04c7475f668593c7097dbf2dd1453baa25cff2333367eadc62a1124a240dfe05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e40d7ba93858915cd80a18bfee202c1e6b2672cd41eff2441d6d5178d98e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e40d7ba93858915cd80a18bfee202c1e6b2672cd41eff2441d6d5178d98e1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.560544 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653e6c74-9f8e-4c5f-b101-5b8da2e962ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T00:03:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 00:03:48.618772 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 00:03:48.623114 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018791735/tls.crt::/tmp/serving-cert-1018791735/tls.key\\\\\\\"\\\\nI0124 00:03:54.397485 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 00:03:54.405962 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 00:03:54.405983 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 00:03:54.406004 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 00:03:54.406008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 00:03:54.413619 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 00:03:54.413647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 00:03:54.413654 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0124 00:03:54.413652 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 00:03:54.413659 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0124 00:03:54.413676 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 00:03:54.413680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 00:03:54.413684 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 00:03:54.415845 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T00:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.579852 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ef6c70c-58da-4218-be4c-8a1d15f72b06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74bb1b0407748b9f3b691a7fad9b13b58992e3688169fda4422379be523ab02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b4a29a22859cdb13f508e7fbc10d00784a61df558cb6cb84079591e7184bf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca67c8957fa68d0167bb1892013f5a9447528a241a81c7b0626e256454edd75\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T00:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.595670 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r4q22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18335446-e572-4741-ad9e-e7aadee7550b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:04:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tsw85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:04:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r4q22\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.614450 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c741baa2e67719ad071fb354e213c74b40c67ff72c8c96ec612148344f07413b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.631193 4676 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5dg9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe79b06-a59d-4d3c-9161-839d4e60fb52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T00:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fef8a199be8ea38c412591af86a6bd9b703bce2a0662a8a61b10ffcb42b17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T00:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cht5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T00:03:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5dg9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:06Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.635203 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.635253 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.635270 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.635295 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.635313 4676 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:06Z","lastTransitionTime":"2026-01-24T00:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.738364 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.738451 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.738470 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.738495 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.738515 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:06Z","lastTransitionTime":"2026-01-24T00:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.840856 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.840913 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.840929 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.840963 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.840980 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:06Z","lastTransitionTime":"2026-01-24T00:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.943432 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.943498 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.943516 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.943540 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:06 crc kubenswrapper[4676]: I0124 00:05:06.943559 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:06Z","lastTransitionTime":"2026-01-24T00:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.046162 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.046231 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.046254 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.046281 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.046298 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:07Z","lastTransitionTime":"2026-01-24T00:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.149075 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.149125 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.149144 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.149344 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.149446 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:07Z","lastTransitionTime":"2026-01-24T00:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.252342 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.252437 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.252455 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.252479 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.252498 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:07Z","lastTransitionTime":"2026-01-24T00:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.255089 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.255127 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:07 crc kubenswrapper[4676]: E0124 00:05:07.255238 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:07 crc kubenswrapper[4676]: E0124 00:05:07.255341 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.310253 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:41:19.374833952 +0000 UTC Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.355024 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.355083 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.355106 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.355133 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.355154 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:07Z","lastTransitionTime":"2026-01-24T00:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.458357 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.458600 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.458626 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.458651 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.458668 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:07Z","lastTransitionTime":"2026-01-24T00:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.561775 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.561868 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.561893 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.561926 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.561950 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:07Z","lastTransitionTime":"2026-01-24T00:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.665134 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.665215 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.665233 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.665265 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.665289 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:07Z","lastTransitionTime":"2026-01-24T00:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.767801 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.767883 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.767907 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.767940 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.767963 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:07Z","lastTransitionTime":"2026-01-24T00:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.870817 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.870875 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.870892 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.870915 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.870931 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:07Z","lastTransitionTime":"2026-01-24T00:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.973901 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.973973 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.973997 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.974028 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:07 crc kubenswrapper[4676]: I0124 00:05:07.974050 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:07Z","lastTransitionTime":"2026-01-24T00:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.076947 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.076978 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.076988 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.077003 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.077014 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:08Z","lastTransitionTime":"2026-01-24T00:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.179339 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.179452 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.179524 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.179589 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.179599 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:08Z","lastTransitionTime":"2026-01-24T00:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.255539 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.255572 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:08 crc kubenswrapper[4676]: E0124 00:05:08.255706 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:08 crc kubenswrapper[4676]: E0124 00:05:08.255821 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.281671 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.281697 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.281705 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.281716 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.281725 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:08Z","lastTransitionTime":"2026-01-24T00:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.311416 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:31:18.427839008 +0000 UTC Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.384299 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.384356 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.384423 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.384455 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.384475 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:08Z","lastTransitionTime":"2026-01-24T00:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.487571 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.487618 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.487630 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.487648 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.487658 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:08Z","lastTransitionTime":"2026-01-24T00:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.590589 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.590643 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.590654 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.590675 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.590689 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:08Z","lastTransitionTime":"2026-01-24T00:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.693724 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.693782 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.693799 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.693825 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.693842 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:08Z","lastTransitionTime":"2026-01-24T00:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.796664 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.796745 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.796769 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.796794 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.796811 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:08Z","lastTransitionTime":"2026-01-24T00:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.899553 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.899616 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.899634 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.899657 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:08 crc kubenswrapper[4676]: I0124 00:05:08.899674 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:08Z","lastTransitionTime":"2026-01-24T00:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.002292 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.002475 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.002498 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.002525 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.002542 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:09Z","lastTransitionTime":"2026-01-24T00:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.013631 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.013863 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.014114 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.014335 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.014582 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:09Z","lastTransitionTime":"2026-01-24T00:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:09 crc kubenswrapper[4676]: E0124 00:05:09.045867 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.051446 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.051780 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.051980 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.052166 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.052326 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:09Z","lastTransitionTime":"2026-01-24T00:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:09 crc kubenswrapper[4676]: E0124 00:05:09.073484 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.080546 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.080804 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.080913 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.081026 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.081128 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:09Z","lastTransitionTime":"2026-01-24T00:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:09 crc kubenswrapper[4676]: E0124 00:05:09.105668 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.108964 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.109009 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.109026 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.109046 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.109063 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:09Z","lastTransitionTime":"2026-01-24T00:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:09 crc kubenswrapper[4676]: E0124 00:05:09.123876 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.127122 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.127162 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.127177 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.127198 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.127216 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:09Z","lastTransitionTime":"2026-01-24T00:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:09 crc kubenswrapper[4676]: E0124 00:05:09.140562 4676 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T00:05:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"55c3ff0e-ee2f-473a-9424-ac0aeb395b03\\\",\\\"systemUUID\\\":\\\"d7308ad2-105f-4282-b3b4-bf5b6bfb52ce\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T00:05:09Z is after 2025-08-24T17:21:41Z" Jan 24 00:05:09 crc kubenswrapper[4676]: E0124 00:05:09.140779 4676 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.142790 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.142876 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.142939 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.142999 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.143074 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:09Z","lastTransitionTime":"2026-01-24T00:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.245885 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.246210 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.246459 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.246696 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.246859 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:09Z","lastTransitionTime":"2026-01-24T00:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.255468 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:09 crc kubenswrapper[4676]: E0124 00:05:09.255606 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.255822 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:09 crc kubenswrapper[4676]: E0124 00:05:09.255995 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.312024 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:50:20.757710369 +0000 UTC Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.350034 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.350297 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.350423 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.350557 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.350738 4676 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:09Z","lastTransitionTime":"2026-01-24T00:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.453753 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.453816 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.453838 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.453867 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.453887 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:09Z","lastTransitionTime":"2026-01-24T00:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.556545 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.556596 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.556613 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.556657 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.556673 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:09Z","lastTransitionTime":"2026-01-24T00:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.659818 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.660138 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.660273 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.660452 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.660600 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:09Z","lastTransitionTime":"2026-01-24T00:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.763878 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.763937 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.763955 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.763979 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.763997 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:09Z","lastTransitionTime":"2026-01-24T00:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.866558 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.866587 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.866598 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.866614 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.866625 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:09Z","lastTransitionTime":"2026-01-24T00:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.969657 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.969713 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.969734 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.969760 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:09 crc kubenswrapper[4676]: I0124 00:05:09.969779 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:09Z","lastTransitionTime":"2026-01-24T00:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.072324 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.072401 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.072419 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.072446 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.072463 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:10Z","lastTransitionTime":"2026-01-24T00:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.174558 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.174603 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.174619 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.174640 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.174657 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:10Z","lastTransitionTime":"2026-01-24T00:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.255355 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:10 crc kubenswrapper[4676]: E0124 00:05:10.255602 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.255904 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:10 crc kubenswrapper[4676]: E0124 00:05:10.256012 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.277526 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.277578 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.277589 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.277606 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.277620 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:10Z","lastTransitionTime":"2026-01-24T00:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.312895 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:05:29.951293816 +0000 UTC Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.381095 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.381164 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.381182 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.381207 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.381226 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:10Z","lastTransitionTime":"2026-01-24T00:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.483929 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.483993 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.484010 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.484034 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.484051 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:10Z","lastTransitionTime":"2026-01-24T00:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.587577 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.587643 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.587668 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.587717 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.587740 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:10Z","lastTransitionTime":"2026-01-24T00:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.690937 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.690999 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.691017 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.691044 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.691061 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:10Z","lastTransitionTime":"2026-01-24T00:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.793997 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.794088 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.794106 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.794166 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.794185 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:10Z","lastTransitionTime":"2026-01-24T00:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.897103 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.897161 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.897180 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.897204 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:10 crc kubenswrapper[4676]: I0124 00:05:10.897221 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:10Z","lastTransitionTime":"2026-01-24T00:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.001183 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.001253 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.001265 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.001282 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.001304 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:11Z","lastTransitionTime":"2026-01-24T00:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.103997 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.104049 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.104065 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.104089 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.104106 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:11Z","lastTransitionTime":"2026-01-24T00:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.207521 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.207573 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.207584 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.207601 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.207615 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:11Z","lastTransitionTime":"2026-01-24T00:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.255098 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.255153 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:11 crc kubenswrapper[4676]: E0124 00:05:11.255287 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:11 crc kubenswrapper[4676]: E0124 00:05:11.255430 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.311593 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.311695 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.311751 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.311776 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.311797 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:11Z","lastTransitionTime":"2026-01-24T00:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.313881 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:09:59.620620673 +0000 UTC Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.415808 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.415859 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.415875 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.415902 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.415920 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:11Z","lastTransitionTime":"2026-01-24T00:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.518985 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.519037 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.519059 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.519087 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.519108 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:11Z","lastTransitionTime":"2026-01-24T00:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.623411 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.623761 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.623780 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.623804 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.623821 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:11Z","lastTransitionTime":"2026-01-24T00:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.726416 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.726483 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.726505 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.726534 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.726556 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:11Z","lastTransitionTime":"2026-01-24T00:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.829249 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.829329 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.829341 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.829364 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.829410 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:11Z","lastTransitionTime":"2026-01-24T00:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.933834 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.933889 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.933905 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.933932 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:11 crc kubenswrapper[4676]: I0124 00:05:11.933950 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:11Z","lastTransitionTime":"2026-01-24T00:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.037356 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.037439 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.037455 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.037478 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.037495 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:12Z","lastTransitionTime":"2026-01-24T00:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.140193 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.140252 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.140268 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.140291 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.140307 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:12Z","lastTransitionTime":"2026-01-24T00:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.242753 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.242841 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.242864 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.242895 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.242914 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:12Z","lastTransitionTime":"2026-01-24T00:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.255130 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.255240 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:12 crc kubenswrapper[4676]: E0124 00:05:12.255344 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:12 crc kubenswrapper[4676]: E0124 00:05:12.255567 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.314023 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:30:14.928363648 +0000 UTC Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.345575 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.345642 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.345660 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.345685 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.345703 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:12Z","lastTransitionTime":"2026-01-24T00:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.449425 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.449496 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.449518 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.449545 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.449566 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:12Z","lastTransitionTime":"2026-01-24T00:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.552610 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.552668 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.552685 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.552708 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.552729 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:12Z","lastTransitionTime":"2026-01-24T00:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.656275 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.656338 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.656356 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.656413 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.656434 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:12Z","lastTransitionTime":"2026-01-24T00:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.759731 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.759810 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.759862 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.759893 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.759915 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:12Z","lastTransitionTime":"2026-01-24T00:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.862485 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.862549 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.862568 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.862592 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.862609 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:12Z","lastTransitionTime":"2026-01-24T00:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.975030 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.975098 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.975119 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.975148 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:12 crc kubenswrapper[4676]: I0124 00:05:12.975169 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:12Z","lastTransitionTime":"2026-01-24T00:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.078292 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.078415 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.078452 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.078483 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.078505 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:13Z","lastTransitionTime":"2026-01-24T00:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.180832 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.180880 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.180896 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.180921 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.180939 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:13Z","lastTransitionTime":"2026-01-24T00:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.255184 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.255204 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:13 crc kubenswrapper[4676]: E0124 00:05:13.255360 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:13 crc kubenswrapper[4676]: E0124 00:05:13.255537 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.256665 4676 scope.go:117] "RemoveContainer" containerID="dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9" Jan 24 00:05:13 crc kubenswrapper[4676]: E0124 00:05:13.256949 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ld569_openshift-ovn-kubernetes(24f0dc26-0857-430f-aebd-073fcfcc1c0a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.283305 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.283354 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.283371 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.283416 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.283435 4676 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:13Z","lastTransitionTime":"2026-01-24T00:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.314993 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 00:50:48.041864878 +0000 UTC Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.385941 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.385990 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.386000 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.386017 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.386028 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:13Z","lastTransitionTime":"2026-01-24T00:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.487958 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.488003 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.488014 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.488030 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.488040 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:13Z","lastTransitionTime":"2026-01-24T00:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.590587 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.590645 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.590661 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.590684 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.590703 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:13Z","lastTransitionTime":"2026-01-24T00:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.693930 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.694092 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.694118 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.694186 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.694203 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:13Z","lastTransitionTime":"2026-01-24T00:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.797340 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.797398 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.797409 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.797423 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.797433 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:13Z","lastTransitionTime":"2026-01-24T00:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.899945 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.900005 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.900027 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.900056 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:13 crc kubenswrapper[4676]: I0124 00:05:13.900077 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:13Z","lastTransitionTime":"2026-01-24T00:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.002798 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.002833 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.002842 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.002857 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.002868 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:14Z","lastTransitionTime":"2026-01-24T00:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.105835 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.105875 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.105887 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.105902 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.105914 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:14Z","lastTransitionTime":"2026-01-24T00:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.209805 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.209866 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.209884 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.209912 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.209930 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:14Z","lastTransitionTime":"2026-01-24T00:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.255777 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.255884 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:14 crc kubenswrapper[4676]: E0124 00:05:14.256275 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:14 crc kubenswrapper[4676]: E0124 00:05:14.256546 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.313030 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.313103 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.313122 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.313147 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.313164 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:14Z","lastTransitionTime":"2026-01-24T00:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.315119 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 19:05:29.76866111 +0000 UTC Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.415975 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.416039 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.416065 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.416094 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.416115 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:14Z","lastTransitionTime":"2026-01-24T00:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.419729 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs\") pod \"network-metrics-daemon-r4q22\" (UID: \"18335446-e572-4741-ad9e-e7aadee7550b\") " pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:14 crc kubenswrapper[4676]: E0124 00:05:14.419949 4676 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:05:14 crc kubenswrapper[4676]: E0124 00:05:14.420051 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs podName:18335446-e572-4741-ad9e-e7aadee7550b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:18.420024754 +0000 UTC m=+162.449995785 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs") pod "network-metrics-daemon-r4q22" (UID: "18335446-e572-4741-ad9e-e7aadee7550b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.519369 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.519456 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.519473 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.519501 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.519518 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:14Z","lastTransitionTime":"2026-01-24T00:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.623047 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.623092 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.623108 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.623131 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.623147 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:14Z","lastTransitionTime":"2026-01-24T00:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.726042 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.726102 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.726119 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.726141 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.726158 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:14Z","lastTransitionTime":"2026-01-24T00:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.828989 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.829053 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.829079 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.829104 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.829121 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:14Z","lastTransitionTime":"2026-01-24T00:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.932756 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.932855 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.932881 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.932914 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:14 crc kubenswrapper[4676]: I0124 00:05:14.932933 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:14Z","lastTransitionTime":"2026-01-24T00:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.035788 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.035828 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.035837 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.035907 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.035919 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:15Z","lastTransitionTime":"2026-01-24T00:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.139567 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.139645 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.139669 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.139699 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.139722 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:15Z","lastTransitionTime":"2026-01-24T00:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.242674 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.242746 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.242764 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.242789 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.242810 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:15Z","lastTransitionTime":"2026-01-24T00:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.255022 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.255074 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:15 crc kubenswrapper[4676]: E0124 00:05:15.255270 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:15 crc kubenswrapper[4676]: E0124 00:05:15.255471 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.315435 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:58:38.852511166 +0000 UTC Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.345431 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.345503 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.345524 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.345607 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.345626 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:15Z","lastTransitionTime":"2026-01-24T00:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.448805 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.448857 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.448873 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.448897 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.448911 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:15Z","lastTransitionTime":"2026-01-24T00:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.551770 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.551829 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.551846 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.551870 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.551886 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:15Z","lastTransitionTime":"2026-01-24T00:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.654806 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.654877 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.654896 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.654922 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.654943 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:15Z","lastTransitionTime":"2026-01-24T00:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.758040 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.758106 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.758126 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.758156 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.758172 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:15Z","lastTransitionTime":"2026-01-24T00:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.861648 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.861684 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.861692 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.861705 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.861714 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:15Z","lastTransitionTime":"2026-01-24T00:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.963891 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.963947 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.963964 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.963989 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:15 crc kubenswrapper[4676]: I0124 00:05:15.964007 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:15Z","lastTransitionTime":"2026-01-24T00:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.067725 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.067895 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.067967 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.068004 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.068074 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:16Z","lastTransitionTime":"2026-01-24T00:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.171645 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.171722 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.171745 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.171777 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.171803 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:16Z","lastTransitionTime":"2026-01-24T00:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.255287 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.255309 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:16 crc kubenswrapper[4676]: E0124 00:05:16.255492 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:16 crc kubenswrapper[4676]: E0124 00:05:16.255620 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.275107 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.275179 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.275196 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.275221 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.275238 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:16Z","lastTransitionTime":"2026-01-24T00:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.315750 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 11:22:25.56358743 +0000 UTC Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.341734 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5dg9q" podStartSLOduration=81.341680189 podStartE2EDuration="1m21.341680189s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:16.298718726 +0000 UTC m=+100.328689787" watchObservedRunningTime="2026-01-24 00:05:16.341680189 +0000 UTC m=+100.371651230" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.360711 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7m8ts" podStartSLOduration=80.360687553 podStartE2EDuration="1m20.360687553s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:16.359340159 +0000 UTC m=+100.389311200" watchObservedRunningTime="2026-01-24 00:05:16.360687553 +0000 UTC m=+100.390658584" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.377552 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.377594 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.377611 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:16 crc 
kubenswrapper[4676]: I0124 00:05:16.377633 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.377649 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:16Z","lastTransitionTime":"2026-01-24T00:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.451578 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-4bcxm" podStartSLOduration=81.45154997 podStartE2EDuration="1m21.45154997s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:16.45124015 +0000 UTC m=+100.481211181" watchObservedRunningTime="2026-01-24 00:05:16.45154997 +0000 UTC m=+100.481521011" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.452095 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ppmcr" podStartSLOduration=81.452085347 podStartE2EDuration="1m21.452085347s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:16.437223648 +0000 UTC m=+100.467194689" watchObservedRunningTime="2026-01-24 00:05:16.452085347 +0000 UTC m=+100.482056388" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.480278 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:16 crc 
kubenswrapper[4676]: I0124 00:05:16.480330 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.480343 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.480360 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.480373 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:16Z","lastTransitionTime":"2026-01-24T00:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.494894 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.494875016 podStartE2EDuration="1m18.494875016s" podCreationTimestamp="2026-01-24 00:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:16.494799954 +0000 UTC m=+100.524770995" watchObservedRunningTime="2026-01-24 00:05:16.494875016 +0000 UTC m=+100.524846007" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.540904 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=80.540875358 podStartE2EDuration="1m20.540875358s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-24 00:05:16.53998465 +0000 UTC m=+100.569955661" watchObservedRunningTime="2026-01-24 00:05:16.540875358 +0000 UTC m=+100.570846369" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.557929 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=52.557899776 podStartE2EDuration="52.557899776s" podCreationTimestamp="2026-01-24 00:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:16.557076709 +0000 UTC m=+100.587047730" watchObservedRunningTime="2026-01-24 00:05:16.557899776 +0000 UTC m=+100.587870787" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.585807 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.585858 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.585871 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.585891 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.585905 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:16Z","lastTransitionTime":"2026-01-24T00:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.617004 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-x57xf" podStartSLOduration=81.61698195 podStartE2EDuration="1m21.61698195s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:16.600676585 +0000 UTC m=+100.630647586" watchObservedRunningTime="2026-01-24 00:05:16.61698195 +0000 UTC m=+100.646952951" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.618190 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podStartSLOduration=81.618164138 podStartE2EDuration="1m21.618164138s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:16.616580797 +0000 UTC m=+100.646551798" watchObservedRunningTime="2026-01-24 00:05:16.618164138 +0000 UTC m=+100.648135139" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.631266 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.63124431 podStartE2EDuration="17.63124431s" podCreationTimestamp="2026-01-24 00:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:16.630737873 +0000 UTC m=+100.660708884" watchObservedRunningTime="2026-01-24 00:05:16.63124431 +0000 UTC m=+100.661215311" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.664648 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.664626205 
podStartE2EDuration="1m21.664626205s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:16.649863409 +0000 UTC m=+100.679834430" watchObservedRunningTime="2026-01-24 00:05:16.664626205 +0000 UTC m=+100.694597216" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.687983 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.688046 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.688058 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.688075 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.688088 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:16Z","lastTransitionTime":"2026-01-24T00:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.791137 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.791218 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.791237 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.791264 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.791282 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:16Z","lastTransitionTime":"2026-01-24T00:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.893919 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.893973 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.893991 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.894055 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.894073 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:16Z","lastTransitionTime":"2026-01-24T00:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.997051 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.997099 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.997115 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.997133 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:16 crc kubenswrapper[4676]: I0124 00:05:16.997145 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:16Z","lastTransitionTime":"2026-01-24T00:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.099993 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.100034 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.100046 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.100064 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.100075 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:17Z","lastTransitionTime":"2026-01-24T00:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.203514 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.203564 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.203581 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.203602 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.203629 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:17Z","lastTransitionTime":"2026-01-24T00:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.255295 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.255295 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:17 crc kubenswrapper[4676]: E0124 00:05:17.255732 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:17 crc kubenswrapper[4676]: E0124 00:05:17.255559 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.306930 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.306979 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.306996 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.307018 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.307035 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:17Z","lastTransitionTime":"2026-01-24T00:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.316827 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:34:45.480449053 +0000 UTC Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.410914 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.410996 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.411024 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.411525 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.411553 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:17Z","lastTransitionTime":"2026-01-24T00:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.514843 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.514914 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.514942 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.514970 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.514992 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:17Z","lastTransitionTime":"2026-01-24T00:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.617848 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.617893 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.617909 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.617932 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.617949 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:17Z","lastTransitionTime":"2026-01-24T00:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.721119 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.721178 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.721197 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.721219 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.721235 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:17Z","lastTransitionTime":"2026-01-24T00:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.823937 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.824299 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.824464 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.824621 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.824756 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:17Z","lastTransitionTime":"2026-01-24T00:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.927830 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.927895 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.927914 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.927939 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:17 crc kubenswrapper[4676]: I0124 00:05:17.927956 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:17Z","lastTransitionTime":"2026-01-24T00:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.031153 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.031213 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.031237 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.031268 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.031292 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:18Z","lastTransitionTime":"2026-01-24T00:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.133725 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.133788 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.133810 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.133839 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.133865 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:18Z","lastTransitionTime":"2026-01-24T00:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.236030 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.236098 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.236119 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.236144 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.236160 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:18Z","lastTransitionTime":"2026-01-24T00:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.254685 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.254763 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:18 crc kubenswrapper[4676]: E0124 00:05:18.254850 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:18 crc kubenswrapper[4676]: E0124 00:05:18.254979 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.317530 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 23:45:07.92895782 +0000 UTC Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.338345 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.338406 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.338417 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.338435 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.338444 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:18Z","lastTransitionTime":"2026-01-24T00:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.441654 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.441692 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.441704 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.441719 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.441730 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:18Z","lastTransitionTime":"2026-01-24T00:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.544907 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.544973 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.544996 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.545026 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.545047 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:18Z","lastTransitionTime":"2026-01-24T00:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.648428 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.648481 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.648498 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.648546 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.648565 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:18Z","lastTransitionTime":"2026-01-24T00:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.751690 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.751751 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.751768 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.751796 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.751817 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:18Z","lastTransitionTime":"2026-01-24T00:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.854878 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.854951 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.854970 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.854998 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.855018 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:18Z","lastTransitionTime":"2026-01-24T00:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.957537 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.957591 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.957608 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.957632 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:18 crc kubenswrapper[4676]: I0124 00:05:18.957649 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:18Z","lastTransitionTime":"2026-01-24T00:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.060428 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.060556 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.060586 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.060644 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.060668 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:19Z","lastTransitionTime":"2026-01-24T00:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.163334 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.163429 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.163453 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.163482 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.163502 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:19Z","lastTransitionTime":"2026-01-24T00:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.254752 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.254797 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:19 crc kubenswrapper[4676]: E0124 00:05:19.254864 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:19 crc kubenswrapper[4676]: E0124 00:05:19.254996 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.266431 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.266494 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.266511 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.266535 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.266553 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:19Z","lastTransitionTime":"2026-01-24T00:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.317868 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 13:23:19.262423048 +0000 UTC Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.368136 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.368178 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.368188 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.368202 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.368214 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:19Z","lastTransitionTime":"2026-01-24T00:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.424370 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.424423 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.424431 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.424445 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.424457 4676 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:05:19Z","lastTransitionTime":"2026-01-24T00:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.488510 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t"] Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.489019 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.495047 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.495356 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.495607 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.495861 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.573630 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9c661b89-85ef-4810-9f35-7a3157d92f19-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mfp4t\" (UID: \"9c661b89-85ef-4810-9f35-7a3157d92f19\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.573729 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c661b89-85ef-4810-9f35-7a3157d92f19-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mfp4t\" (UID: \"9c661b89-85ef-4810-9f35-7a3157d92f19\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.573785 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9c661b89-85ef-4810-9f35-7a3157d92f19-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mfp4t\" (UID: \"9c661b89-85ef-4810-9f35-7a3157d92f19\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.573981 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9c661b89-85ef-4810-9f35-7a3157d92f19-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mfp4t\" (UID: \"9c661b89-85ef-4810-9f35-7a3157d92f19\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.574042 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c661b89-85ef-4810-9f35-7a3157d92f19-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mfp4t\" (UID: \"9c661b89-85ef-4810-9f35-7a3157d92f19\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.676162 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9c661b89-85ef-4810-9f35-7a3157d92f19-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mfp4t\" (UID: \"9c661b89-85ef-4810-9f35-7a3157d92f19\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.676232 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c661b89-85ef-4810-9f35-7a3157d92f19-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mfp4t\" (UID: \"9c661b89-85ef-4810-9f35-7a3157d92f19\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.676286 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c661b89-85ef-4810-9f35-7a3157d92f19-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mfp4t\" (UID: \"9c661b89-85ef-4810-9f35-7a3157d92f19\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.676296 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9c661b89-85ef-4810-9f35-7a3157d92f19-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mfp4t\" (UID: \"9c661b89-85ef-4810-9f35-7a3157d92f19\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.676341 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c661b89-85ef-4810-9f35-7a3157d92f19-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mfp4t\" (UID: \"9c661b89-85ef-4810-9f35-7a3157d92f19\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.676432 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9c661b89-85ef-4810-9f35-7a3157d92f19-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mfp4t\" (UID: \"9c661b89-85ef-4810-9f35-7a3157d92f19\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.676620 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/9c661b89-85ef-4810-9f35-7a3157d92f19-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mfp4t\" (UID: \"9c661b89-85ef-4810-9f35-7a3157d92f19\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.678618 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c661b89-85ef-4810-9f35-7a3157d92f19-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mfp4t\" (UID: \"9c661b89-85ef-4810-9f35-7a3157d92f19\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.684801 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c661b89-85ef-4810-9f35-7a3157d92f19-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mfp4t\" (UID: \"9c661b89-85ef-4810-9f35-7a3157d92f19\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.711295 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c661b89-85ef-4810-9f35-7a3157d92f19-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mfp4t\" (UID: \"9c661b89-85ef-4810-9f35-7a3157d92f19\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.853902 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" Jan 24 00:05:19 crc kubenswrapper[4676]: I0124 00:05:19.960980 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" event={"ID":"9c661b89-85ef-4810-9f35-7a3157d92f19","Type":"ContainerStarted","Data":"6b81bde11a23224074d6a8a11874f6f0e34b2aac36339b4d8136f6f43fd02a68"} Jan 24 00:05:20 crc kubenswrapper[4676]: I0124 00:05:20.255802 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:20 crc kubenswrapper[4676]: E0124 00:05:20.256247 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:20 crc kubenswrapper[4676]: I0124 00:05:20.256627 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:20 crc kubenswrapper[4676]: E0124 00:05:20.256827 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:20 crc kubenswrapper[4676]: I0124 00:05:20.318754 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:28:22.979160938 +0000 UTC Jan 24 00:05:20 crc kubenswrapper[4676]: I0124 00:05:20.318887 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 24 00:05:20 crc kubenswrapper[4676]: I0124 00:05:20.330698 4676 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 24 00:05:20 crc kubenswrapper[4676]: I0124 00:05:20.966582 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" event={"ID":"9c661b89-85ef-4810-9f35-7a3157d92f19","Type":"ContainerStarted","Data":"d2bc90ef0b210390f6be3cbc85c806f3507e98e1817f4c761f79eb661e0e26e8"} Jan 24 00:05:21 crc kubenswrapper[4676]: I0124 00:05:21.255575 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:21 crc kubenswrapper[4676]: E0124 00:05:21.255733 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:21 crc kubenswrapper[4676]: I0124 00:05:21.255577 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:21 crc kubenswrapper[4676]: E0124 00:05:21.256027 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:22 crc kubenswrapper[4676]: I0124 00:05:22.254909 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:22 crc kubenswrapper[4676]: I0124 00:05:22.254940 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:22 crc kubenswrapper[4676]: E0124 00:05:22.256010 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:22 crc kubenswrapper[4676]: E0124 00:05:22.256569 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:23 crc kubenswrapper[4676]: I0124 00:05:23.255510 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:23 crc kubenswrapper[4676]: I0124 00:05:23.255568 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:23 crc kubenswrapper[4676]: E0124 00:05:23.255682 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:23 crc kubenswrapper[4676]: E0124 00:05:23.255790 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:24 crc kubenswrapper[4676]: I0124 00:05:24.255051 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:24 crc kubenswrapper[4676]: I0124 00:05:24.255083 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:24 crc kubenswrapper[4676]: E0124 00:05:24.255229 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:24 crc kubenswrapper[4676]: E0124 00:05:24.255363 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:25 crc kubenswrapper[4676]: I0124 00:05:25.255341 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:25 crc kubenswrapper[4676]: I0124 00:05:25.255430 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:25 crc kubenswrapper[4676]: E0124 00:05:25.255560 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:25 crc kubenswrapper[4676]: E0124 00:05:25.255693 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:26 crc kubenswrapper[4676]: I0124 00:05:26.255681 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:26 crc kubenswrapper[4676]: I0124 00:05:26.255681 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:26 crc kubenswrapper[4676]: E0124 00:05:26.257997 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:26 crc kubenswrapper[4676]: E0124 00:05:26.257722 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:27 crc kubenswrapper[4676]: I0124 00:05:27.255573 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:27 crc kubenswrapper[4676]: I0124 00:05:27.255827 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:27 crc kubenswrapper[4676]: E0124 00:05:27.255946 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:27 crc kubenswrapper[4676]: E0124 00:05:27.256110 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:28 crc kubenswrapper[4676]: I0124 00:05:28.255605 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:28 crc kubenswrapper[4676]: E0124 00:05:28.256090 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:28 crc kubenswrapper[4676]: I0124 00:05:28.256325 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:28 crc kubenswrapper[4676]: I0124 00:05:28.256522 4676 scope.go:117] "RemoveContainer" containerID="dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9" Jan 24 00:05:28 crc kubenswrapper[4676]: E0124 00:05:28.256651 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:28 crc kubenswrapper[4676]: I0124 00:05:28.992849 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x57xf_b88e9d2e-35da-45a8-ac7e-22afd660ff9f/kube-multus/1.log" Jan 24 00:05:28 crc kubenswrapper[4676]: I0124 00:05:28.993234 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x57xf_b88e9d2e-35da-45a8-ac7e-22afd660ff9f/kube-multus/0.log" Jan 24 00:05:28 crc kubenswrapper[4676]: I0124 00:05:28.993262 4676 generic.go:334] "Generic (PLEG): container finished" podID="b88e9d2e-35da-45a8-ac7e-22afd660ff9f" containerID="cf92889c765992ceabf09d2de008fbbbfc1dc097012d57ce03aafee751eb759b" exitCode=1 Jan 24 00:05:28 crc kubenswrapper[4676]: I0124 00:05:28.993306 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x57xf" event={"ID":"b88e9d2e-35da-45a8-ac7e-22afd660ff9f","Type":"ContainerDied","Data":"cf92889c765992ceabf09d2de008fbbbfc1dc097012d57ce03aafee751eb759b"} Jan 24 00:05:28 crc kubenswrapper[4676]: I0124 00:05:28.993340 4676 scope.go:117] "RemoveContainer" containerID="db43410c7c6a0f160ce59403dc22a9b216d73ef62bebd77daf8f6e6818ed733c" Jan 24 00:05:28 crc kubenswrapper[4676]: I0124 00:05:28.993691 4676 scope.go:117] "RemoveContainer" containerID="cf92889c765992ceabf09d2de008fbbbfc1dc097012d57ce03aafee751eb759b" Jan 24 00:05:28 crc kubenswrapper[4676]: E0124 00:05:28.993824 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-x57xf_openshift-multus(b88e9d2e-35da-45a8-ac7e-22afd660ff9f)\"" pod="openshift-multus/multus-x57xf" podUID="b88e9d2e-35da-45a8-ac7e-22afd660ff9f" Jan 24 00:05:29 crc kubenswrapper[4676]: I0124 00:05:29.003105 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/3.log" Jan 24 00:05:29 crc kubenswrapper[4676]: I0124 00:05:29.007538 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerStarted","Data":"b097984a7c57abb09d2fc1362781982b54bf19474644ee1742c87013905c7faf"} Jan 24 00:05:29 crc kubenswrapper[4676]: I0124 00:05:29.008354 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:05:29 crc kubenswrapper[4676]: I0124 00:05:29.021801 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mfp4t" podStartSLOduration=94.021786014 podStartE2EDuration="1m34.021786014s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:20.988732808 +0000 UTC m=+105.018703849" watchObservedRunningTime="2026-01-24 00:05:29.021786014 +0000 UTC m=+113.051757025" Jan 24 00:05:29 crc kubenswrapper[4676]: I0124 00:05:29.060609 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podStartSLOduration=94.060590084 podStartE2EDuration="1m34.060590084s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:29.059270512 +0000 UTC m=+113.089241523" watchObservedRunningTime="2026-01-24 00:05:29.060590084 +0000 UTC m=+113.090561085" Jan 24 00:05:29 crc kubenswrapper[4676]: I0124 00:05:29.250020 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-r4q22"] Jan 24 00:05:29 crc 
kubenswrapper[4676]: I0124 00:05:29.250182 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:29 crc kubenswrapper[4676]: E0124 00:05:29.250320 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:29 crc kubenswrapper[4676]: I0124 00:05:29.254959 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:29 crc kubenswrapper[4676]: E0124 00:05:29.255125 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:29 crc kubenswrapper[4676]: I0124 00:05:29.255494 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:29 crc kubenswrapper[4676]: E0124 00:05:29.255666 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:30 crc kubenswrapper[4676]: I0124 00:05:30.013612 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x57xf_b88e9d2e-35da-45a8-ac7e-22afd660ff9f/kube-multus/1.log" Jan 24 00:05:30 crc kubenswrapper[4676]: I0124 00:05:30.254982 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:30 crc kubenswrapper[4676]: E0124 00:05:30.255164 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:31 crc kubenswrapper[4676]: I0124 00:05:31.254818 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:31 crc kubenswrapper[4676]: I0124 00:05:31.254918 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:31 crc kubenswrapper[4676]: I0124 00:05:31.254919 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:31 crc kubenswrapper[4676]: E0124 00:05:31.255017 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:31 crc kubenswrapper[4676]: E0124 00:05:31.255160 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:31 crc kubenswrapper[4676]: E0124 00:05:31.255267 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:32 crc kubenswrapper[4676]: I0124 00:05:32.254864 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:32 crc kubenswrapper[4676]: E0124 00:05:32.255044 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:33 crc kubenswrapper[4676]: I0124 00:05:33.254898 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:33 crc kubenswrapper[4676]: I0124 00:05:33.254907 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:33 crc kubenswrapper[4676]: I0124 00:05:33.254902 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:33 crc kubenswrapper[4676]: E0124 00:05:33.255269 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:33 crc kubenswrapper[4676]: E0124 00:05:33.255350 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:33 crc kubenswrapper[4676]: E0124 00:05:33.255045 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:34 crc kubenswrapper[4676]: I0124 00:05:34.255713 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:34 crc kubenswrapper[4676]: E0124 00:05:34.255996 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:35 crc kubenswrapper[4676]: I0124 00:05:35.255489 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:35 crc kubenswrapper[4676]: I0124 00:05:35.255584 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:35 crc kubenswrapper[4676]: I0124 00:05:35.255519 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:35 crc kubenswrapper[4676]: E0124 00:05:35.255679 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:35 crc kubenswrapper[4676]: E0124 00:05:35.255808 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:35 crc kubenswrapper[4676]: E0124 00:05:35.256002 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:36 crc kubenswrapper[4676]: E0124 00:05:36.199413 4676 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 24 00:05:36 crc kubenswrapper[4676]: I0124 00:05:36.254901 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:36 crc kubenswrapper[4676]: E0124 00:05:36.256081 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:36 crc kubenswrapper[4676]: E0124 00:05:36.364373 4676 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 24 00:05:37 crc kubenswrapper[4676]: I0124 00:05:37.255009 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:37 crc kubenswrapper[4676]: E0124 00:05:37.255139 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:37 crc kubenswrapper[4676]: I0124 00:05:37.255017 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:37 crc kubenswrapper[4676]: I0124 00:05:37.255038 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:37 crc kubenswrapper[4676]: E0124 00:05:37.255275 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:37 crc kubenswrapper[4676]: E0124 00:05:37.255323 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:38 crc kubenswrapper[4676]: I0124 00:05:38.254818 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:38 crc kubenswrapper[4676]: E0124 00:05:38.254998 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:39 crc kubenswrapper[4676]: I0124 00:05:39.255759 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:39 crc kubenswrapper[4676]: I0124 00:05:39.255776 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:39 crc kubenswrapper[4676]: E0124 00:05:39.255977 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:39 crc kubenswrapper[4676]: I0124 00:05:39.255793 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:39 crc kubenswrapper[4676]: E0124 00:05:39.256133 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:39 crc kubenswrapper[4676]: E0124 00:05:39.256163 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:40 crc kubenswrapper[4676]: I0124 00:05:40.255273 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:40 crc kubenswrapper[4676]: E0124 00:05:40.255725 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:40 crc kubenswrapper[4676]: I0124 00:05:40.256036 4676 scope.go:117] "RemoveContainer" containerID="cf92889c765992ceabf09d2de008fbbbfc1dc097012d57ce03aafee751eb759b" Jan 24 00:05:41 crc kubenswrapper[4676]: I0124 00:05:41.066685 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x57xf_b88e9d2e-35da-45a8-ac7e-22afd660ff9f/kube-multus/1.log" Jan 24 00:05:41 crc kubenswrapper[4676]: I0124 00:05:41.067019 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x57xf" event={"ID":"b88e9d2e-35da-45a8-ac7e-22afd660ff9f","Type":"ContainerStarted","Data":"503448e193566525ada0f32c12c8a2978a0f18fbc763208a99e7e6534727cec5"} Jan 24 00:05:41 crc kubenswrapper[4676]: I0124 00:05:41.255663 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:41 crc kubenswrapper[4676]: I0124 00:05:41.255766 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:41 crc kubenswrapper[4676]: E0124 00:05:41.255822 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:41 crc kubenswrapper[4676]: I0124 00:05:41.255923 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:41 crc kubenswrapper[4676]: E0124 00:05:41.255959 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:41 crc kubenswrapper[4676]: E0124 00:05:41.256124 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:41 crc kubenswrapper[4676]: E0124 00:05:41.366168 4676 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 24 00:05:42 crc kubenswrapper[4676]: I0124 00:05:42.254865 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:42 crc kubenswrapper[4676]: E0124 00:05:42.255508 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:43 crc kubenswrapper[4676]: I0124 00:05:43.255451 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:43 crc kubenswrapper[4676]: I0124 00:05:43.255576 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:43 crc kubenswrapper[4676]: I0124 00:05:43.255474 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:43 crc kubenswrapper[4676]: E0124 00:05:43.255635 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:43 crc kubenswrapper[4676]: E0124 00:05:43.255777 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:43 crc kubenswrapper[4676]: E0124 00:05:43.255911 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:44 crc kubenswrapper[4676]: I0124 00:05:44.254894 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:44 crc kubenswrapper[4676]: E0124 00:05:44.255049 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:45 crc kubenswrapper[4676]: I0124 00:05:45.255485 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:45 crc kubenswrapper[4676]: I0124 00:05:45.255540 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:45 crc kubenswrapper[4676]: E0124 00:05:45.255605 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 00:05:45 crc kubenswrapper[4676]: I0124 00:05:45.255506 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:45 crc kubenswrapper[4676]: E0124 00:05:45.255694 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r4q22" podUID="18335446-e572-4741-ad9e-e7aadee7550b" Jan 24 00:05:45 crc kubenswrapper[4676]: E0124 00:05:45.255833 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 00:05:46 crc kubenswrapper[4676]: I0124 00:05:46.255404 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:46 crc kubenswrapper[4676]: E0124 00:05:46.257086 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 00:05:47 crc kubenswrapper[4676]: I0124 00:05:47.255149 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:05:47 crc kubenswrapper[4676]: I0124 00:05:47.255191 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:05:47 crc kubenswrapper[4676]: I0124 00:05:47.255285 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:05:47 crc kubenswrapper[4676]: I0124 00:05:47.257966 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 24 00:05:47 crc kubenswrapper[4676]: I0124 00:05:47.258671 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 24 00:05:47 crc kubenswrapper[4676]: I0124 00:05:47.258929 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 24 00:05:47 crc kubenswrapper[4676]: I0124 00:05:47.259295 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 24 00:05:48 crc kubenswrapper[4676]: I0124 00:05:48.255146 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:05:48 crc kubenswrapper[4676]: I0124 00:05:48.257793 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 24 00:05:48 crc kubenswrapper[4676]: I0124 00:05:48.260457 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.202198 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.906569 4676 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.964151 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5"] Jan 24 00:05:49 crc 
kubenswrapper[4676]: I0124 00:05:49.964624 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.965408 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qzmmz"] Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.965722 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.966243 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jmv6d"] Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.966642 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.967467 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2scbc"] Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.968089 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.968348 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.968804 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.969745 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.969948 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.970470 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.970519 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.970564 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.972156 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.974563 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.974632 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.974893 
4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.975166 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.979282 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.979719 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.980172 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.980337 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.980526 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.980784 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.980967 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.981647 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.982114 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 24 00:05:49 crc 
kubenswrapper[4676]: I0124 00:05:49.983081 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.983197 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.983458 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.980788 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.984811 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.991258 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.992893 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.993364 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 24 00:05:49 crc kubenswrapper[4676]: I0124 00:05:49.993584 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.003216 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7zllh"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.003884 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.004923 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-x649d"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.005410 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.006547 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.007748 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.008223 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kvcv8"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.011180 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.011327 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.011496 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.014444 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.016968 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.020015 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.020431 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.020830 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29486880-rhzd2"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.021200 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29486880-rhzd2" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.021752 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.022304 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-g2smk"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.022902 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.027012 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.027173 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.027313 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.027552 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.032957 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.033249 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.034159 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.036853 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.037905 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.044432 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.048291 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.050919 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.061966 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.062392 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.064098 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.064226 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.064309 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.064396 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 24 00:05:50 crc 
kubenswrapper[4676]: I0124 00:05:50.064473 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ft4kq"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.064545 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.064615 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.065294 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.065349 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.065498 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.065643 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.065759 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.065900 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.066036 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.066232 4676 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.066405 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.066455 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.066625 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.066408 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.066784 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.066859 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.067011 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.067187 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.068331 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.068817 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.068876 4676 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.071426 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.073367 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.073510 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.074751 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.074881 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.075007 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.076506 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.076626 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.076727 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.076839 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.076909 
4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.077020 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.077116 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.084557 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-6m9lm"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.084955 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.087946 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.088257 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.088473 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.088974 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.089123 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.089636 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090221 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b189330a-ee63-45f1-8104-4ef173f8ee22-audit-dir\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090268 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-oauth-serving-cert\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090301 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-etcd-client\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090324 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-trusted-ca-bundle\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090349 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-audit-dir\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090370 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35efd97c-0521-428a-896d-b67490207db5-serving-cert\") pod \"openshift-config-operator-7777fb866f-7zllh\" (UID: \"35efd97c-0521-428a-896d-b67490207db5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090479 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-client-ca\") pod \"controller-manager-879f6c89f-qzmmz\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090508 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-etcd-serving-ca\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090531 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-serving-cert\") pod \"route-controller-manager-6576b87f9c-n6vx5\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:50 crc kubenswrapper[4676]: 
I0124 00:05:50.090562 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p6c5\" (UniqueName: \"kubernetes.io/projected/35efd97c-0521-428a-896d-b67490207db5-kube-api-access-6p6c5\") pod \"openshift-config-operator-7777fb866f-7zllh\" (UID: \"35efd97c-0521-428a-896d-b67490207db5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090592 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-oauth-config\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090615 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6351c23-e315-4c92-a467-380da403d3c4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ft4kq\" (UID: \"e6351c23-e315-4c92-a467-380da403d3c4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090642 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090666 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/9c51c973-c370-41e8-b167-25d3b11418bf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jmv6d\" (UID: \"9c51c973-c370-41e8-b167-25d3b11418bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090697 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qp87\" (UniqueName: \"kubernetes.io/projected/00beace7-1e83-40ed-8d92-6da0cae7817f-kube-api-access-4qp87\") pod \"image-pruner-29486880-rhzd2\" (UID: \"00beace7-1e83-40ed-8d92-6da0cae7817f\") " pod="openshift-image-registry/image-pruner-29486880-rhzd2" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090723 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6360940-ea9d-456d-b546-5a20af404ee5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zc8ts\" (UID: \"a6360940-ea9d-456d-b546-5a20af404ee5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090748 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ksdz\" (UniqueName: \"kubernetes.io/projected/32de8698-4bd5-4154-92a3-76930504a72d-kube-api-access-4ksdz\") pod \"console-operator-58897d9998-x649d\" (UID: \"32de8698-4bd5-4154-92a3-76930504a72d\") " pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090777 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32de8698-4bd5-4154-92a3-76930504a72d-trusted-ca\") pod \"console-operator-58897d9998-x649d\" (UID: \"32de8698-4bd5-4154-92a3-76930504a72d\") " 
pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090804 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-service-ca\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090843 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/00beace7-1e83-40ed-8d92-6da0cae7817f-serviceca\") pod \"image-pruner-29486880-rhzd2\" (UID: \"00beace7-1e83-40ed-8d92-6da0cae7817f\") " pod="openshift-image-registry/image-pruner-29486880-rhzd2" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090872 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-encryption-config\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090899 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b189330a-ee63-45f1-8104-4ef173f8ee22-serving-cert\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090927 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6351c23-e315-4c92-a467-380da403d3c4-serving-cert\") pod 
\"authentication-operator-69f744f599-ft4kq\" (UID: \"e6351c23-e315-4c92-a467-380da403d3c4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090954 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv9mz\" (UniqueName: \"kubernetes.io/projected/b37a2847-a94a-4a0c-b092-1ed7155a2d35-kube-api-access-nv9mz\") pod \"controller-manager-879f6c89f-qzmmz\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.090986 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87hzh\" (UniqueName: \"kubernetes.io/projected/9c51c973-c370-41e8-b167-25d3b11418bf-kube-api-access-87hzh\") pod \"machine-api-operator-5694c8668f-jmv6d\" (UID: \"9c51c973-c370-41e8-b167-25d3b11418bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091011 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-audit\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091028 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6360940-ea9d-456d-b546-5a20af404ee5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zc8ts\" (UID: \"a6360940-ea9d-456d-b546-5a20af404ee5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091052 4676 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/35efd97c-0521-428a-896d-b67490207db5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7zllh\" (UID: \"35efd97c-0521-428a-896d-b67490207db5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091092 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6dpw\" (UniqueName: \"kubernetes.io/projected/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-kube-api-access-f6dpw\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091116 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091144 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6351c23-e315-4c92-a467-380da403d3c4-service-ca-bundle\") pod \"authentication-operator-69f744f599-ft4kq\" (UID: \"e6351c23-e315-4c92-a467-380da403d3c4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091162 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091180 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-client-ca\") pod \"route-controller-manager-6576b87f9c-n6vx5\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091198 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b37a2847-a94a-4a0c-b092-1ed7155a2d35-serving-cert\") pod \"controller-manager-879f6c89f-qzmmz\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091214 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-config\") pod \"route-controller-manager-6576b87f9c-n6vx5\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091233 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4cw9\" (UniqueName: \"kubernetes.io/projected/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-kube-api-access-h4cw9\") pod \"route-controller-manager-6576b87f9c-n6vx5\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091251 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-serving-cert\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091296 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-audit-policies\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091327 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091353 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qzmmz\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091393 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-node-pullsecrets\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091413 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6351c23-e315-4c92-a467-380da403d3c4-config\") pod \"authentication-operator-69f744f599-ft4kq\" (UID: \"e6351c23-e315-4c92-a467-380da403d3c4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091430 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8d73604-754b-4dea-9be4-3451964e5589-machine-approver-tls\") pod \"machine-approver-56656f9798-ml6vn\" (UID: \"d8d73604-754b-4dea-9be4-3451964e5589\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091455 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8911791-9db1-4463-997e-1ed50da17324-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rkpbj\" (UID: \"f8911791-9db1-4463-997e-1ed50da17324\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091479 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-config\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " 
pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091500 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqnsp\" (UniqueName: \"kubernetes.io/projected/a6360940-ea9d-456d-b546-5a20af404ee5-kube-api-access-jqnsp\") pod \"openshift-apiserver-operator-796bbdcf4f-zc8ts\" (UID: \"a6360940-ea9d-456d-b546-5a20af404ee5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091520 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8911791-9db1-4463-997e-1ed50da17324-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rkpbj\" (UID: \"f8911791-9db1-4463-997e-1ed50da17324\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091543 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9c51c973-c370-41e8-b167-25d3b11418bf-images\") pod \"machine-api-operator-5694c8668f-jmv6d\" (UID: \"9c51c973-c370-41e8-b167-25d3b11418bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091566 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8d73604-754b-4dea-9be4-3451964e5589-auth-proxy-config\") pod \"machine-approver-56656f9798-ml6vn\" (UID: \"d8d73604-754b-4dea-9be4-3451964e5589\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091591 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-image-import-ca\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091636 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b189330a-ee63-45f1-8104-4ef173f8ee22-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091680 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091709 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b189330a-ee63-45f1-8104-4ef173f8ee22-etcd-client\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091732 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c51c973-c370-41e8-b167-25d3b11418bf-config\") pod \"machine-api-operator-5694c8668f-jmv6d\" (UID: \"9c51c973-c370-41e8-b167-25d3b11418bf\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091754 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f9mm\" (UniqueName: \"kubernetes.io/projected/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-kube-api-access-5f9mm\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091778 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-config\") pod \"controller-manager-879f6c89f-qzmmz\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091800 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-serving-cert\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091827 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32de8698-4bd5-4154-92a3-76930504a72d-serving-cert\") pod \"console-operator-58897d9998-x649d\" (UID: \"32de8698-4bd5-4154-92a3-76930504a72d\") " pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091879 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091918 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmt92\" (UniqueName: \"kubernetes.io/projected/d8d73604-754b-4dea-9be4-3451964e5589-kube-api-access-cmt92\") pod \"machine-approver-56656f9798-ml6vn\" (UID: \"d8d73604-754b-4dea-9be4-3451964e5589\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091945 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b189330a-ee63-45f1-8104-4ef173f8ee22-audit-policies\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091969 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8911791-9db1-4463-997e-1ed50da17324-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rkpbj\" (UID: \"f8911791-9db1-4463-997e-1ed50da17324\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.091997 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " 
pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.092008 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.092022 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-config\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.092048 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b189330a-ee63-45f1-8104-4ef173f8ee22-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.092073 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d73604-754b-4dea-9be4-3451964e5589-config\") pod \"machine-approver-56656f9798-ml6vn\" (UID: \"d8d73604-754b-4dea-9be4-3451964e5589\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.092102 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fftp\" (UniqueName: \"kubernetes.io/projected/f8911791-9db1-4463-997e-1ed50da17324-kube-api-access-6fftp\") pod \"cluster-image-registry-operator-dc59b4c8b-rkpbj\" (UID: \"f8911791-9db1-4463-997e-1ed50da17324\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" Jan 24 00:05:50 crc 
kubenswrapper[4676]: I0124 00:05:50.092129 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th8cd\" (UniqueName: \"kubernetes.io/projected/b189330a-ee63-45f1-8104-4ef173f8ee22-kube-api-access-th8cd\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.092171 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.092201 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32de8698-4bd5-4154-92a3-76930504a72d-config\") pod \"console-operator-58897d9998-x649d\" (UID: \"32de8698-4bd5-4154-92a3-76930504a72d\") " pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.092252 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6vn7\" (UniqueName: \"kubernetes.io/projected/e6351c23-e315-4c92-a467-380da403d3c4-kube-api-access-q6vn7\") pod \"authentication-operator-69f744f599-ft4kq\" (UID: \"e6351c23-e315-4c92-a467-380da403d3c4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.092277 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89435826-645d-48a2-aa3b-f5c42003dcbe-audit-dir\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.092302 4676 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b189330a-ee63-45f1-8104-4ef173f8ee22-encryption-config\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.093966 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xt6kb"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.094543 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wvvpg"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.094910 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6m9lm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.094923 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cdrbc"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.095026 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.095087 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xt6kb" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.095656 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.095980 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.096320 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.097191 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.097492 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cdrbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.097707 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.097926 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.111476 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.111725 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.111853 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.116787 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.117087 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.120519 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.121820 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mkwj8"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.128481 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.129031 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ck624"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.129127 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 24 
00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.129307 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.129630 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ck624" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.129911 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mkwj8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.130094 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.130452 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.131190 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.131454 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-zq745"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.132987 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.133334 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.133869 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.134367 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.135559 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.136572 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.136953 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6kx9j"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.138538 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.139027 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.141629 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6kx9j" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.141961 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.142867 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.147813 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.148568 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.149086 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.149282 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.155460 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.156166 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.156799 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.156864 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.157153 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.160119 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rsw66"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.161038 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.168559 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.171083 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hjbqw"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.172094 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjbqw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.173735 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-shf2w"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.174341 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.186093 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.186844 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.187443 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.190580 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.191217 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194626 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/00beace7-1e83-40ed-8d92-6da0cae7817f-serviceca\") pod \"image-pruner-29486880-rhzd2\" (UID: \"00beace7-1e83-40ed-8d92-6da0cae7817f\") " pod="openshift-image-registry/image-pruner-29486880-rhzd2" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194654 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/25db14a1-725f-42ae-a6e9-646546b584c7-metrics-tls\") pod \"ingress-operator-5b745b69d9-wh9bw\" (UID: \"25db14a1-725f-42ae-a6e9-646546b584c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194675 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlm94\" (UniqueName: \"kubernetes.io/projected/89435826-645d-48a2-aa3b-f5c42003dcbe-kube-api-access-tlm94\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194697 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk6nf\" (UniqueName: \"kubernetes.io/projected/d8eed212-9137-45e5-8347-1f921fbedb19-kube-api-access-nk6nf\") pod \"multus-admission-controller-857f4d67dd-ck624\" (UID: \"d8eed212-9137-45e5-8347-1f921fbedb19\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ck624" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194717 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87hzh\" 
(UniqueName: \"kubernetes.io/projected/9c51c973-c370-41e8-b167-25d3b11418bf-kube-api-access-87hzh\") pod \"machine-api-operator-5694c8668f-jmv6d\" (UID: \"9c51c973-c370-41e8-b167-25d3b11418bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194739 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/aca1e4a5-f702-4803-8f47-7fcb8c7326b6-srv-cert\") pod \"catalog-operator-68c6474976-2xb97\" (UID: \"aca1e4a5-f702-4803-8f47-7fcb8c7326b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194761 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977bd2bf-e652-4b16-b8fc-902d4a1d7860-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4nnjp\" (UID: \"977bd2bf-e652-4b16-b8fc-902d4a1d7860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194801 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-audit\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194821 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 
00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194839 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f5e70d9-8b16-4684-bd98-4287ccbb6d85-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-975pl\" (UID: \"2f5e70d9-8b16-4684-bd98-4287ccbb6d85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194855 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7srgg\" (UniqueName: \"kubernetes.io/projected/977bd2bf-e652-4b16-b8fc-902d4a1d7860-kube-api-access-7srgg\") pod \"kube-storage-version-migrator-operator-b67b599dd-4nnjp\" (UID: \"977bd2bf-e652-4b16-b8fc-902d4a1d7860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194871 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-client-ca\") pod \"route-controller-manager-6576b87f9c-n6vx5\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194886 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fdb046fc-eba9-4f07-a1d1-2db71a2d46c1-apiservice-cert\") pod \"packageserver-d55dfcdfc-r6r59\" (UID: \"fdb046fc-eba9-4f07-a1d1-2db71a2d46c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194905 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194921 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a35921c-db91-45c2-a92c-a42f5b2cab84-metrics-tls\") pod \"dns-operator-744455d44c-cdrbc\" (UID: \"8a35921c-db91-45c2-a92c-a42f5b2cab84\") " pod="openshift-dns-operator/dns-operator-744455d44c-cdrbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194939 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4bc9\" (UniqueName: \"kubernetes.io/projected/7b8f105b-569d-47f2-b564-a0830b010e31-kube-api-access-j4bc9\") pod \"router-default-5444994796-zq745\" (UID: \"7b8f105b-569d-47f2-b564-a0830b010e31\") " pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194956 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-serving-cert\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.194972 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-audit-policies\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 
00:05:50.194990 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c50241f8-d135-4df8-b047-e76fb28b8a3d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2cgk2\" (UID: \"c50241f8-d135-4df8-b047-e76fb28b8a3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195010 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qzmmz\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195026 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6351c23-e315-4c92-a467-380da403d3c4-config\") pod \"authentication-operator-69f744f599-ft4kq\" (UID: \"e6351c23-e315-4c92-a467-380da403d3c4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195042 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8d73604-754b-4dea-9be4-3451964e5589-machine-approver-tls\") pod \"machine-approver-56656f9798-ml6vn\" (UID: \"d8d73604-754b-4dea-9be4-3451964e5589\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195060 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-rsw66\" (UID: \"7af03157-ee92-4e72-a775-acaeabb73e65\") " pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195079 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm4tl\" (UniqueName: \"kubernetes.io/projected/7af03157-ee92-4e72-a775-acaeabb73e65-kube-api-access-jm4tl\") pod \"marketplace-operator-79b997595-rsw66\" (UID: \"7af03157-ee92-4e72-a775-acaeabb73e65\") " pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195097 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6kdj\" (UniqueName: \"kubernetes.io/projected/9d605c02-a15d-46a8-942c-cd85e6ce5452-kube-api-access-h6kdj\") pod \"machine-config-controller-84d6567774-nhmlj\" (UID: \"9d605c02-a15d-46a8-942c-cd85e6ce5452\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195115 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4d53a745-f985-4659-b62c-ce297ce8ce85-signing-key\") pod \"service-ca-9c57cc56f-6kx9j\" (UID: \"4d53a745-f985-4659-b62c-ce297ce8ce85\") " pod="openshift-service-ca/service-ca-9c57cc56f-6kx9j" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195133 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqnsp\" (UniqueName: \"kubernetes.io/projected/a6360940-ea9d-456d-b546-5a20af404ee5-kube-api-access-jqnsp\") pod \"openshift-apiserver-operator-796bbdcf4f-zc8ts\" (UID: \"a6360940-ea9d-456d-b546-5a20af404ee5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 
00:05:50.195150 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9c51c973-c370-41e8-b167-25d3b11418bf-images\") pod \"machine-api-operator-5694c8668f-jmv6d\" (UID: \"9c51c973-c370-41e8-b167-25d3b11418bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195169 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8d73604-754b-4dea-9be4-3451964e5589-auth-proxy-config\") pod \"machine-approver-56656f9798-ml6vn\" (UID: \"d8d73604-754b-4dea-9be4-3451964e5589\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195186 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d89ff2-f919-459e-8089-5097aab0f4e2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sxjwn\" (UID: \"c2d89ff2-f919-459e-8089-5097aab0f4e2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195203 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-image-import-ca\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195222 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b189330a-ee63-45f1-8104-4ef173f8ee22-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195240 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195256 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d605c02-a15d-46a8-942c-cd85e6ce5452-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nhmlj\" (UID: \"9d605c02-a15d-46a8-942c-cd85e6ce5452\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195272 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9350aae-3053-431e-a2d6-2137f990ca08-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9h9cg\" (UID: \"c9350aae-3053-431e-a2d6-2137f990ca08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195289 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c51c973-c370-41e8-b167-25d3b11418bf-config\") pod \"machine-api-operator-5694c8668f-jmv6d\" (UID: \"9c51c973-c370-41e8-b167-25d3b11418bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195305 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5f9mm\" (UniqueName: \"kubernetes.io/projected/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-kube-api-access-5f9mm\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195322 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195337 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9350aae-3053-431e-a2d6-2137f990ca08-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9h9cg\" (UID: \"c9350aae-3053-431e-a2d6-2137f990ca08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195352 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8eed212-9137-45e5-8347-1f921fbedb19-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ck624\" (UID: \"d8eed212-9137-45e5-8347-1f921fbedb19\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ck624" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195368 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmt92\" (UniqueName: \"kubernetes.io/projected/d8d73604-754b-4dea-9be4-3451964e5589-kube-api-access-cmt92\") pod \"machine-approver-56656f9798-ml6vn\" (UID: \"d8d73604-754b-4dea-9be4-3451964e5589\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195403 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm9zj\" (UniqueName: \"kubernetes.io/projected/aca1e4a5-f702-4803-8f47-7fcb8c7326b6-kube-api-access-mm9zj\") pod \"catalog-operator-68c6474976-2xb97\" (UID: \"aca1e4a5-f702-4803-8f47-7fcb8c7326b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195419 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f5e70d9-8b16-4684-bd98-4287ccbb6d85-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-975pl\" (UID: \"2f5e70d9-8b16-4684-bd98-4287ccbb6d85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195437 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-config\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195452 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3651bf5e-f692-42b1-8d5e-512daae90cc8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6wjpz\" (UID: \"3651bf5e-f692-42b1-8d5e-512daae90cc8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195467 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d89ff2-f919-459e-8089-5097aab0f4e2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sxjwn\" (UID: \"c2d89ff2-f919-459e-8089-5097aab0f4e2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195484 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d73604-754b-4dea-9be4-3451964e5589-config\") pod \"machine-approver-56656f9798-ml6vn\" (UID: \"d8d73604-754b-4dea-9be4-3451964e5589\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195502 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96br9\" (UniqueName: \"kubernetes.io/projected/25db14a1-725f-42ae-a6e9-646546b584c7-kube-api-access-96br9\") pod \"ingress-operator-5b745b69d9-wh9bw\" (UID: \"25db14a1-725f-42ae-a6e9-646546b584c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195520 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th8cd\" (UniqueName: \"kubernetes.io/projected/b189330a-ee63-45f1-8104-4ef173f8ee22-kube-api-access-th8cd\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195546 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89435826-645d-48a2-aa3b-f5c42003dcbe-audit-dir\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" 
Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195563 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fdb046fc-eba9-4f07-a1d1-2db71a2d46c1-tmpfs\") pod \"packageserver-d55dfcdfc-r6r59\" (UID: \"fdb046fc-eba9-4f07-a1d1-2db71a2d46c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195583 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-oauth-serving-cert\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195617 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b8f105b-569d-47f2-b564-a0830b010e31-default-certificate\") pod \"router-default-5444994796-zq745\" (UID: \"7b8f105b-569d-47f2-b564-a0830b010e31\") " pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195635 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/aca1e4a5-f702-4803-8f47-7fcb8c7326b6-profile-collector-cert\") pod \"catalog-operator-68c6474976-2xb97\" (UID: \"aca1e4a5-f702-4803-8f47-7fcb8c7326b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195651 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gszgz\" (UniqueName: \"kubernetes.io/projected/c50241f8-d135-4df8-b047-e76fb28b8a3d-kube-api-access-gszgz\") pod 
\"olm-operator-6b444d44fb-2cgk2\" (UID: \"c50241f8-d135-4df8-b047-e76fb28b8a3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195669 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p6c5\" (UniqueName: \"kubernetes.io/projected/35efd97c-0521-428a-896d-b67490207db5-kube-api-access-6p6c5\") pod \"openshift-config-operator-7777fb866f-7zllh\" (UID: \"35efd97c-0521-428a-896d-b67490207db5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195695 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-client-ca\") pod \"controller-manager-879f6c89f-qzmmz\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195727 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-etcd-serving-ca\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195751 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195768 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rsw66\" (UID: \"7af03157-ee92-4e72-a775-acaeabb73e65\") " pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195783 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eabf562-d289-4685-8ee5-ed1525930d19-secret-volume\") pod \"collect-profiles-29486880-7srrg\" (UID: \"1eabf562-d289-4685-8ee5-ed1525930d19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195797 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fdb046fc-eba9-4f07-a1d1-2db71a2d46c1-webhook-cert\") pod \"packageserver-d55dfcdfc-r6r59\" (UID: \"fdb046fc-eba9-4f07-a1d1-2db71a2d46c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195813 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qp87\" (UniqueName: \"kubernetes.io/projected/00beace7-1e83-40ed-8d92-6da0cae7817f-kube-api-access-4qp87\") pod \"image-pruner-29486880-rhzd2\" (UID: \"00beace7-1e83-40ed-8d92-6da0cae7817f\") " pod="openshift-image-registry/image-pruner-29486880-rhzd2" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195832 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6360940-ea9d-456d-b546-5a20af404ee5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zc8ts\" (UID: \"a6360940-ea9d-456d-b546-5a20af404ee5\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195848 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ksdz\" (UniqueName: \"kubernetes.io/projected/32de8698-4bd5-4154-92a3-76930504a72d-kube-api-access-4ksdz\") pod \"console-operator-58897d9998-x649d\" (UID: \"32de8698-4bd5-4154-92a3-76930504a72d\") " pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195863 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-service-ca\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195879 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b8f105b-569d-47f2-b564-a0830b010e31-stats-auth\") pod \"router-default-5444994796-zq745\" (UID: \"7b8f105b-569d-47f2-b564-a0830b010e31\") " pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195897 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32de8698-4bd5-4154-92a3-76930504a72d-trusted-ca\") pod \"console-operator-58897d9998-x649d\" (UID: \"32de8698-4bd5-4154-92a3-76930504a72d\") " pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195912 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcwrj\" (UniqueName: \"kubernetes.io/projected/3651bf5e-f692-42b1-8d5e-512daae90cc8-kube-api-access-dcwrj\") 
pod \"machine-config-operator-74547568cd-6wjpz\" (UID: \"3651bf5e-f692-42b1-8d5e-512daae90cc8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195928 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b8f105b-569d-47f2-b564-a0830b010e31-metrics-certs\") pod \"router-default-5444994796-zq745\" (UID: \"7b8f105b-569d-47f2-b564-a0830b010e31\") " pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195945 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv9mz\" (UniqueName: \"kubernetes.io/projected/b37a2847-a94a-4a0c-b092-1ed7155a2d35-kube-api-access-nv9mz\") pod \"controller-manager-879f6c89f-qzmmz\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195960 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-encryption-config\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195978 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b189330a-ee63-45f1-8104-4ef173f8ee22-serving-cert\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.195999 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e6351c23-e315-4c92-a467-380da403d3c4-serving-cert\") pod \"authentication-operator-69f744f599-ft4kq\" (UID: \"e6351c23-e315-4c92-a467-380da403d3c4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196018 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f5e70d9-8b16-4684-bd98-4287ccbb6d85-config\") pod \"kube-controller-manager-operator-78b949d7b-975pl\" (UID: \"2f5e70d9-8b16-4684-bd98-4287ccbb6d85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196036 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196052 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6360940-ea9d-456d-b546-5a20af404ee5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zc8ts\" (UID: \"a6360940-ea9d-456d-b546-5a20af404ee5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196068 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/35efd97c-0521-428a-896d-b67490207db5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7zllh\" (UID: \"35efd97c-0521-428a-896d-b67490207db5\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196085 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6dpw\" (UniqueName: \"kubernetes.io/projected/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-kube-api-access-f6dpw\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196102 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6351c23-e315-4c92-a467-380da403d3c4-service-ca-bundle\") pod \"authentication-operator-69f744f599-ft4kq\" (UID: \"e6351c23-e315-4c92-a467-380da403d3c4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196121 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196141 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/596a60f1-3eb7-4a57-af4a-1cb1f37f2824-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ljbnb\" (UID: \"596a60f1-3eb7-4a57-af4a-1cb1f37f2824\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196159 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxkrp\" (UniqueName: \"kubernetes.io/projected/4b605a66-7904-4596-a67f-ea21ef41a24b-kube-api-access-dxkrp\") pod \"downloads-7954f5f757-6m9lm\" (UID: \"4b605a66-7904-4596-a67f-ea21ef41a24b\") " pod="openshift-console/downloads-7954f5f757-6m9lm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196184 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4cw9\" (UniqueName: \"kubernetes.io/projected/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-kube-api-access-h4cw9\") pod \"route-controller-manager-6576b87f9c-n6vx5\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196201 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b37a2847-a94a-4a0c-b092-1ed7155a2d35-serving-cert\") pod \"controller-manager-879f6c89f-qzmmz\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196216 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-config\") pod \"route-controller-manager-6576b87f9c-n6vx5\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196232 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-node-pullsecrets\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " 
pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196251 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196267 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2d89ff2-f919-459e-8089-5097aab0f4e2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sxjwn\" (UID: \"c2d89ff2-f919-459e-8089-5097aab0f4e2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196291 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8911791-9db1-4463-997e-1ed50da17324-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rkpbj\" (UID: \"f8911791-9db1-4463-997e-1ed50da17324\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196306 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3651bf5e-f692-42b1-8d5e-512daae90cc8-images\") pod \"machine-config-operator-74547568cd-6wjpz\" (UID: \"3651bf5e-f692-42b1-8d5e-512daae90cc8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196323 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmkx8\" (UniqueName: \"kubernetes.io/projected/a19dcc7f-c3e9-4aa8-90dc-412550a8060f-kube-api-access-vmkx8\") pod \"migrator-59844c95c7-hjbqw\" (UID: \"a19dcc7f-c3e9-4aa8-90dc-412550a8060f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjbqw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196338 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvpn8\" (UniqueName: \"kubernetes.io/projected/fdb046fc-eba9-4f07-a1d1-2db71a2d46c1-kube-api-access-mvpn8\") pod \"packageserver-d55dfcdfc-r6r59\" (UID: \"fdb046fc-eba9-4f07-a1d1-2db71a2d46c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196355 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-config\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196370 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3651bf5e-f692-42b1-8d5e-512daae90cc8-proxy-tls\") pod \"machine-config-operator-74547568cd-6wjpz\" (UID: \"3651bf5e-f692-42b1-8d5e-512daae90cc8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196403 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9350aae-3053-431e-a2d6-2137f990ca08-config\") pod \"kube-apiserver-operator-766d6c64bb-9h9cg\" (UID: \"c9350aae-3053-431e-a2d6-2137f990ca08\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196421 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8911791-9db1-4463-997e-1ed50da17324-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rkpbj\" (UID: \"f8911791-9db1-4463-997e-1ed50da17324\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196439 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg8sc\" (UniqueName: \"kubernetes.io/projected/596a60f1-3eb7-4a57-af4a-1cb1f37f2824-kube-api-access-vg8sc\") pod \"package-server-manager-789f6589d5-ljbnb\" (UID: \"596a60f1-3eb7-4a57-af4a-1cb1f37f2824\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196456 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196474 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b189330a-ee63-45f1-8104-4ef173f8ee22-etcd-client\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196490 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-config\") pod \"controller-manager-879f6c89f-qzmmz\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196507 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/977bd2bf-e652-4b16-b8fc-902d4a1d7860-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4nnjp\" (UID: \"977bd2bf-e652-4b16-b8fc-902d4a1d7860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196531 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-serving-cert\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196548 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25db14a1-725f-42ae-a6e9-646546b584c7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wh9bw\" (UID: \"25db14a1-725f-42ae-a6e9-646546b584c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196565 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32de8698-4bd5-4154-92a3-76930504a72d-serving-cert\") pod \"console-operator-58897d9998-x649d\" (UID: \"32de8698-4bd5-4154-92a3-76930504a72d\") " pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:05:50 crc 
kubenswrapper[4676]: I0124 00:05:50.196581 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvj8q\" (UniqueName: \"kubernetes.io/projected/1eabf562-d289-4685-8ee5-ed1525930d19-kube-api-access-gvj8q\") pod \"collect-profiles-29486880-7srrg\" (UID: \"1eabf562-d289-4685-8ee5-ed1525930d19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196597 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5n78\" (UniqueName: \"kubernetes.io/projected/4d53a745-f985-4659-b62c-ce297ce8ce85-kube-api-access-r5n78\") pod \"service-ca-9c57cc56f-6kx9j\" (UID: \"4d53a745-f985-4659-b62c-ce297ce8ce85\") " pod="openshift-service-ca/service-ca-9c57cc56f-6kx9j" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196615 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196631 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b189330a-ee63-45f1-8104-4ef173f8ee22-audit-policies\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196659 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8911791-9db1-4463-997e-1ed50da17324-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-rkpbj\" (UID: \"f8911791-9db1-4463-997e-1ed50da17324\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196677 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196694 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4d53a745-f985-4659-b62c-ce297ce8ce85-signing-cabundle\") pod \"service-ca-9c57cc56f-6kx9j\" (UID: \"4d53a745-f985-4659-b62c-ce297ce8ce85\") " pod="openshift-service-ca/service-ca-9c57cc56f-6kx9j" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196710 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b8f105b-569d-47f2-b564-a0830b010e31-service-ca-bundle\") pod \"router-default-5444994796-zq745\" (UID: \"7b8f105b-569d-47f2-b564-a0830b010e31\") " pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196728 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b189330a-ee63-45f1-8104-4ef173f8ee22-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196747 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196764 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr5xr\" (UniqueName: \"kubernetes.io/projected/8a35921c-db91-45c2-a92c-a42f5b2cab84-kube-api-access-gr5xr\") pod \"dns-operator-744455d44c-cdrbc\" (UID: \"8a35921c-db91-45c2-a92c-a42f5b2cab84\") " pod="openshift-dns-operator/dns-operator-744455d44c-cdrbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196788 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d605c02-a15d-46a8-942c-cd85e6ce5452-proxy-tls\") pod \"machine-config-controller-84d6567774-nhmlj\" (UID: \"9d605c02-a15d-46a8-942c-cd85e6ce5452\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196811 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fftp\" (UniqueName: \"kubernetes.io/projected/f8911791-9db1-4463-997e-1ed50da17324-kube-api-access-6fftp\") pod \"cluster-image-registry-operator-dc59b4c8b-rkpbj\" (UID: \"f8911791-9db1-4463-997e-1ed50da17324\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196834 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32de8698-4bd5-4154-92a3-76930504a72d-config\") pod \"console-operator-58897d9998-x649d\" (UID: \"32de8698-4bd5-4154-92a3-76930504a72d\") " 
pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196855 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d52fce7-049e-441e-8e40-15d044e0319a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xt6kb\" (UID: \"6d52fce7-049e-441e-8e40-15d044e0319a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xt6kb" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196873 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b189330a-ee63-45f1-8104-4ef173f8ee22-encryption-config\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196890 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6vn7\" (UniqueName: \"kubernetes.io/projected/e6351c23-e315-4c92-a467-380da403d3c4-kube-api-access-q6vn7\") pod \"authentication-operator-69f744f599-ft4kq\" (UID: \"e6351c23-e315-4c92-a467-380da403d3c4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196906 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50241f8-d135-4df8-b047-e76fb28b8a3d-srv-cert\") pod \"olm-operator-6b444d44fb-2cgk2\" (UID: \"c50241f8-d135-4df8-b047-e76fb28b8a3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196935 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/b189330a-ee63-45f1-8104-4ef173f8ee22-audit-dir\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196957 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25db14a1-725f-42ae-a6e9-646546b584c7-trusted-ca\") pod \"ingress-operator-5b745b69d9-wh9bw\" (UID: \"25db14a1-725f-42ae-a6e9-646546b584c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196975 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-etcd-client\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.196993 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-trusted-ca-bundle\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.197027 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-audit-dir\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.197050 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/35efd97c-0521-428a-896d-b67490207db5-serving-cert\") pod \"openshift-config-operator-7777fb866f-7zllh\" (UID: \"35efd97c-0521-428a-896d-b67490207db5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.197073 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6351c23-e315-4c92-a467-380da403d3c4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ft4kq\" (UID: \"e6351c23-e315-4c92-a467-380da403d3c4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.197097 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-serving-cert\") pod \"route-controller-manager-6576b87f9c-n6vx5\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.197120 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-oauth-config\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.197144 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eabf562-d289-4685-8ee5-ed1525930d19-config-volume\") pod \"collect-profiles-29486880-7srrg\" (UID: \"1eabf562-d289-4685-8ee5-ed1525930d19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" Jan 24 00:05:50 crc 
kubenswrapper[4676]: I0124 00:05:50.197165 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c51c973-c370-41e8-b167-25d3b11418bf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jmv6d\" (UID: \"9c51c973-c370-41e8-b167-25d3b11418bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.197186 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvhcs\" (UniqueName: \"kubernetes.io/projected/6d52fce7-049e-441e-8e40-15d044e0319a-kube-api-access-nvhcs\") pod \"cluster-samples-operator-665b6dd947-xt6kb\" (UID: \"6d52fce7-049e-441e-8e40-15d044e0319a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xt6kb" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.198034 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/00beace7-1e83-40ed-8d92-6da0cae7817f-serviceca\") pod \"image-pruner-29486880-rhzd2\" (UID: \"00beace7-1e83-40ed-8d92-6da0cae7817f\") " pod="openshift-image-registry/image-pruner-29486880-rhzd2" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.199008 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32de8698-4bd5-4154-92a3-76930504a72d-trusted-ca\") pod \"console-operator-58897d9998-x649d\" (UID: \"32de8698-4bd5-4154-92a3-76930504a72d\") " pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.199489 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-audit\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " 
pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.199802 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c51c973-c370-41e8-b167-25d3b11418bf-config\") pod \"machine-api-operator-5694c8668f-jmv6d\" (UID: \"9c51c973-c370-41e8-b167-25d3b11418bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.200132 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.200566 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-config\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.201036 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d73604-754b-4dea-9be4-3451964e5589-config\") pod \"machine-approver-56656f9798-ml6vn\" (UID: \"d8d73604-754b-4dea-9be4-3451964e5589\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.201068 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-client-ca\") pod \"route-controller-manager-6576b87f9c-n6vx5\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.201141 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89435826-645d-48a2-aa3b-f5c42003dcbe-audit-dir\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.203699 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7zllh"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.204752 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-client-ca\") pod \"controller-manager-879f6c89f-qzmmz\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.205331 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32de8698-4bd5-4154-92a3-76930504a72d-config\") pod \"console-operator-58897d9998-x649d\" (UID: \"32de8698-4bd5-4154-92a3-76930504a72d\") " pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.208000 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-audit-policies\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.209193 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-etcd-serving-ca\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.217228 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-oauth-serving-cert\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.222505 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-audit-dir\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.223973 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.274396 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-serving-cert\") pod \"route-controller-manager-6576b87f9c-n6vx5\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.224895 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: 
\"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.225273 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b189330a-ee63-45f1-8104-4ef173f8ee22-audit-policies\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.225508 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6360940-ea9d-456d-b546-5a20af404ee5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zc8ts\" (UID: \"a6360940-ea9d-456d-b546-5a20af404ee5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.225755 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/35efd97c-0521-428a-896d-b67490207db5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7zllh\" (UID: \"35efd97c-0521-428a-896d-b67490207db5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.226258 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6351c23-e315-4c92-a467-380da403d3c4-service-ca-bundle\") pod \"authentication-operator-69f744f599-ft4kq\" (UID: \"e6351c23-e315-4c92-a467-380da403d3c4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.227199 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qzmmz\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.230658 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.231588 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-trusted-ca-bundle\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.231606 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-encryption-config\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.231652 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b189330a-ee63-45f1-8104-4ef173f8ee22-audit-dir\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.232203 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e6351c23-e315-4c92-a467-380da403d3c4-config\") pod \"authentication-operator-69f744f599-ft4kq\" (UID: \"e6351c23-e315-4c92-a467-380da403d3c4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.232569 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8d73604-754b-4dea-9be4-3451964e5589-auth-proxy-config\") pod \"machine-approver-56656f9798-ml6vn\" (UID: \"d8d73604-754b-4dea-9be4-3451964e5589\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.233100 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6351c23-e315-4c92-a467-380da403d3c4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ft4kq\" (UID: \"e6351c23-e315-4c92-a467-380da403d3c4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.240053 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8d73604-754b-4dea-9be4-3451964e5589-machine-approver-tls\") pod \"machine-approver-56656f9798-ml6vn\" (UID: \"d8d73604-754b-4dea-9be4-3451964e5589\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.240099 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-node-pullsecrets\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.240476 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-config\") pod \"route-controller-manager-6576b87f9c-n6vx5\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.243191 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b189330a-ee63-45f1-8104-4ef173f8ee22-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.244021 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-image-import-ca\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.245679 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-config\") pod \"controller-manager-879f6c89f-qzmmz\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.245977 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6360940-ea9d-456d-b546-5a20af404ee5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zc8ts\" (UID: \"a6360940-ea9d-456d-b546-5a20af404ee5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts" Jan 24 00:05:50 crc kubenswrapper[4676]: 
I0124 00:05:50.246604 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b189330a-ee63-45f1-8104-4ef173f8ee22-etcd-client\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.246786 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-oauth-config\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.247123 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b189330a-ee63-45f1-8104-4ef173f8ee22-encryption-config\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.247211 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b37a2847-a94a-4a0c-b092-1ed7155a2d35-serving-cert\") pod \"controller-manager-879f6c89f-qzmmz\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.247233 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kvcv8"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.274881 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-serving-cert\") pod \"apiserver-76f77b778f-2scbc\" (UID: 
\"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.248034 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32de8698-4bd5-4154-92a3-76930504a72d-serving-cert\") pod \"console-operator-58897d9998-x649d\" (UID: \"32de8698-4bd5-4154-92a3-76930504a72d\") " pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.274925 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jmv6d"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.248279 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.248599 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b189330a-ee63-45f1-8104-4ef173f8ee22-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.248641 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6351c23-e315-4c92-a467-380da403d3c4-serving-cert\") pod \"authentication-operator-69f744f599-ft4kq\" (UID: \"e6351c23-e315-4c92-a467-380da403d3c4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.275087 4676 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c51c973-c370-41e8-b167-25d3b11418bf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jmv6d\" (UID: \"9c51c973-c370-41e8-b167-25d3b11418bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.249927 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.249556 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-serving-cert\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.266390 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9c51c973-c370-41e8-b167-25d3b11418bf-images\") pod \"machine-api-operator-5694c8668f-jmv6d\" (UID: \"9c51c973-c370-41e8-b167-25d3b11418bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.270602 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-config\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.274634 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8911791-9db1-4463-997e-1ed50da17324-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rkpbj\" (UID: \"f8911791-9db1-4463-997e-1ed50da17324\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.269122 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-etcd-client\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.275513 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.231886 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.224295 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-service-ca\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.250279 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.247521 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35efd97c-0521-428a-896d-b67490207db5-serving-cert\") pod \"openshift-config-operator-7777fb866f-7zllh\" (UID: \"35efd97c-0521-428a-896d-b67490207db5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.248314 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b189330a-ee63-45f1-8104-4ef173f8ee22-serving-cert\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.284498 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.286913 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.288048 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2scbc"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.288092 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qzmmz"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.288118 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-lgg5l"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.288650 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: 
\"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.289056 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lgg5l" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.289453 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n8shl"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.290250 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n8shl" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.293847 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8911791-9db1-4463-997e-1ed50da17324-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rkpbj\" (UID: \"f8911791-9db1-4463-997e-1ed50da17324\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.298536 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ft4kq"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299642 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c50241f8-d135-4df8-b047-e76fb28b8a3d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2cgk2\" (UID: \"c50241f8-d135-4df8-b047-e76fb28b8a3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299671 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rsw66\" (UID: \"7af03157-ee92-4e72-a775-acaeabb73e65\") " pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299694 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm4tl\" (UniqueName: \"kubernetes.io/projected/7af03157-ee92-4e72-a775-acaeabb73e65-kube-api-access-jm4tl\") pod \"marketplace-operator-79b997595-rsw66\" (UID: \"7af03157-ee92-4e72-a775-acaeabb73e65\") " pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299712 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6kdj\" (UniqueName: \"kubernetes.io/projected/9d605c02-a15d-46a8-942c-cd85e6ce5452-kube-api-access-h6kdj\") pod \"machine-config-controller-84d6567774-nhmlj\" (UID: \"9d605c02-a15d-46a8-942c-cd85e6ce5452\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299728 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4d53a745-f985-4659-b62c-ce297ce8ce85-signing-key\") pod \"service-ca-9c57cc56f-6kx9j\" (UID: \"4d53a745-f985-4659-b62c-ce297ce8ce85\") " pod="openshift-service-ca/service-ca-9c57cc56f-6kx9j" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299751 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc 
kubenswrapper[4676]: I0124 00:05:50.299789 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d89ff2-f919-459e-8089-5097aab0f4e2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sxjwn\" (UID: \"c2d89ff2-f919-459e-8089-5097aab0f4e2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299806 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d605c02-a15d-46a8-942c-cd85e6ce5452-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nhmlj\" (UID: \"9d605c02-a15d-46a8-942c-cd85e6ce5452\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299822 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299836 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9350aae-3053-431e-a2d6-2137f990ca08-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9h9cg\" (UID: \"c9350aae-3053-431e-a2d6-2137f990ca08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299852 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c9350aae-3053-431e-a2d6-2137f990ca08-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9h9cg\" (UID: \"c9350aae-3053-431e-a2d6-2137f990ca08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299873 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8eed212-9137-45e5-8347-1f921fbedb19-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ck624\" (UID: \"d8eed212-9137-45e5-8347-1f921fbedb19\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ck624" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299895 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm9zj\" (UniqueName: \"kubernetes.io/projected/aca1e4a5-f702-4803-8f47-7fcb8c7326b6-kube-api-access-mm9zj\") pod \"catalog-operator-68c6474976-2xb97\" (UID: \"aca1e4a5-f702-4803-8f47-7fcb8c7326b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299912 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f5e70d9-8b16-4684-bd98-4287ccbb6d85-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-975pl\" (UID: \"2f5e70d9-8b16-4684-bd98-4287ccbb6d85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299930 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3651bf5e-f692-42b1-8d5e-512daae90cc8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6wjpz\" (UID: \"3651bf5e-f692-42b1-8d5e-512daae90cc8\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299945 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d89ff2-f919-459e-8089-5097aab0f4e2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sxjwn\" (UID: \"c2d89ff2-f919-459e-8089-5097aab0f4e2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299961 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96br9\" (UniqueName: \"kubernetes.io/projected/25db14a1-725f-42ae-a6e9-646546b584c7-kube-api-access-96br9\") pod \"ingress-operator-5b745b69d9-wh9bw\" (UID: \"25db14a1-725f-42ae-a6e9-646546b584c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299982 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fdb046fc-eba9-4f07-a1d1-2db71a2d46c1-tmpfs\") pod \"packageserver-d55dfcdfc-r6r59\" (UID: \"fdb046fc-eba9-4f07-a1d1-2db71a2d46c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.299999 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b8f105b-569d-47f2-b564-a0830b010e31-default-certificate\") pod \"router-default-5444994796-zq745\" (UID: \"7b8f105b-569d-47f2-b564-a0830b010e31\") " pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300016 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/aca1e4a5-f702-4803-8f47-7fcb8c7326b6-profile-collector-cert\") pod \"catalog-operator-68c6474976-2xb97\" (UID: \"aca1e4a5-f702-4803-8f47-7fcb8c7326b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300030 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gszgz\" (UniqueName: \"kubernetes.io/projected/c50241f8-d135-4df8-b047-e76fb28b8a3d-kube-api-access-gszgz\") pod \"olm-operator-6b444d44fb-2cgk2\" (UID: \"c50241f8-d135-4df8-b047-e76fb28b8a3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300062 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rsw66\" (UID: \"7af03157-ee92-4e72-a775-acaeabb73e65\") " pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300087 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eabf562-d289-4685-8ee5-ed1525930d19-secret-volume\") pod \"collect-profiles-29486880-7srrg\" (UID: \"1eabf562-d289-4685-8ee5-ed1525930d19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300104 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fdb046fc-eba9-4f07-a1d1-2db71a2d46c1-webhook-cert\") pod \"packageserver-d55dfcdfc-r6r59\" (UID: \"fdb046fc-eba9-4f07-a1d1-2db71a2d46c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:05:50 crc kubenswrapper[4676]: 
I0124 00:05:50.300125 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b8f105b-569d-47f2-b564-a0830b010e31-stats-auth\") pod \"router-default-5444994796-zq745\" (UID: \"7b8f105b-569d-47f2-b564-a0830b010e31\") " pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300139 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcwrj\" (UniqueName: \"kubernetes.io/projected/3651bf5e-f692-42b1-8d5e-512daae90cc8-kube-api-access-dcwrj\") pod \"machine-config-operator-74547568cd-6wjpz\" (UID: \"3651bf5e-f692-42b1-8d5e-512daae90cc8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300155 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b8f105b-569d-47f2-b564-a0830b010e31-metrics-certs\") pod \"router-default-5444994796-zq745\" (UID: \"7b8f105b-569d-47f2-b564-a0830b010e31\") " pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300174 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f5e70d9-8b16-4684-bd98-4287ccbb6d85-config\") pod \"kube-controller-manager-operator-78b949d7b-975pl\" (UID: \"2f5e70d9-8b16-4684-bd98-4287ccbb6d85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300197 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kvcv8\" 
(UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300213 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxkrp\" (UniqueName: \"kubernetes.io/projected/4b605a66-7904-4596-a67f-ea21ef41a24b-kube-api-access-dxkrp\") pod \"downloads-7954f5f757-6m9lm\" (UID: \"4b605a66-7904-4596-a67f-ea21ef41a24b\") " pod="openshift-console/downloads-7954f5f757-6m9lm" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300231 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/596a60f1-3eb7-4a57-af4a-1cb1f37f2824-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ljbnb\" (UID: \"596a60f1-3eb7-4a57-af4a-1cb1f37f2824\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300251 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300266 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2d89ff2-f919-459e-8089-5097aab0f4e2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sxjwn\" (UID: \"c2d89ff2-f919-459e-8089-5097aab0f4e2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300282 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmkx8\" (UniqueName: \"kubernetes.io/projected/a19dcc7f-c3e9-4aa8-90dc-412550a8060f-kube-api-access-vmkx8\") pod \"migrator-59844c95c7-hjbqw\" (UID: \"a19dcc7f-c3e9-4aa8-90dc-412550a8060f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjbqw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300297 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3651bf5e-f692-42b1-8d5e-512daae90cc8-images\") pod \"machine-config-operator-74547568cd-6wjpz\" (UID: \"3651bf5e-f692-42b1-8d5e-512daae90cc8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.300312 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3651bf5e-f692-42b1-8d5e-512daae90cc8-proxy-tls\") pod \"machine-config-operator-74547568cd-6wjpz\" (UID: \"3651bf5e-f692-42b1-8d5e-512daae90cc8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.301251 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.301520 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fdb046fc-eba9-4f07-a1d1-2db71a2d46c1-tmpfs\") pod \"packageserver-d55dfcdfc-r6r59\" (UID: \"fdb046fc-eba9-4f07-a1d1-2db71a2d46c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.302082 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/3651bf5e-f692-42b1-8d5e-512daae90cc8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6wjpz\" (UID: \"3651bf5e-f692-42b1-8d5e-512daae90cc8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.302762 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d605c02-a15d-46a8-942c-cd85e6ce5452-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nhmlj\" (UID: \"9d605c02-a15d-46a8-942c-cd85e6ce5452\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.302765 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303029 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvpn8\" (UniqueName: \"kubernetes.io/projected/fdb046fc-eba9-4f07-a1d1-2db71a2d46c1-kube-api-access-mvpn8\") pod \"packageserver-d55dfcdfc-r6r59\" (UID: \"fdb046fc-eba9-4f07-a1d1-2db71a2d46c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303064 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9350aae-3053-431e-a2d6-2137f990ca08-config\") pod \"kube-apiserver-operator-766d6c64bb-9h9cg\" (UID: \"c9350aae-3053-431e-a2d6-2137f990ca08\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303082 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg8sc\" (UniqueName: \"kubernetes.io/projected/596a60f1-3eb7-4a57-af4a-1cb1f37f2824-kube-api-access-vg8sc\") pod \"package-server-manager-789f6589d5-ljbnb\" (UID: \"596a60f1-3eb7-4a57-af4a-1cb1f37f2824\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303099 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/977bd2bf-e652-4b16-b8fc-902d4a1d7860-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4nnjp\" (UID: \"977bd2bf-e652-4b16-b8fc-902d4a1d7860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303116 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25db14a1-725f-42ae-a6e9-646546b584c7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wh9bw\" (UID: \"25db14a1-725f-42ae-a6e9-646546b584c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303134 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvj8q\" (UniqueName: \"kubernetes.io/projected/1eabf562-d289-4685-8ee5-ed1525930d19-kube-api-access-gvj8q\") pod \"collect-profiles-29486880-7srrg\" (UID: \"1eabf562-d289-4685-8ee5-ed1525930d19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303153 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-r5n78\" (UniqueName: \"kubernetes.io/projected/4d53a745-f985-4659-b62c-ce297ce8ce85-kube-api-access-r5n78\") pod \"service-ca-9c57cc56f-6kx9j\" (UID: \"4d53a745-f985-4659-b62c-ce297ce8ce85\") " pod="openshift-service-ca/service-ca-9c57cc56f-6kx9j" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303173 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4d53a745-f985-4659-b62c-ce297ce8ce85-signing-cabundle\") pod \"service-ca-9c57cc56f-6kx9j\" (UID: \"4d53a745-f985-4659-b62c-ce297ce8ce85\") " pod="openshift-service-ca/service-ca-9c57cc56f-6kx9j" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303191 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr5xr\" (UniqueName: \"kubernetes.io/projected/8a35921c-db91-45c2-a92c-a42f5b2cab84-kube-api-access-gr5xr\") pod \"dns-operator-744455d44c-cdrbc\" (UID: \"8a35921c-db91-45c2-a92c-a42f5b2cab84\") " pod="openshift-dns-operator/dns-operator-744455d44c-cdrbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303206 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d605c02-a15d-46a8-942c-cd85e6ce5452-proxy-tls\") pod \"machine-config-controller-84d6567774-nhmlj\" (UID: \"9d605c02-a15d-46a8-942c-cd85e6ce5452\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303221 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b8f105b-569d-47f2-b564-a0830b010e31-service-ca-bundle\") pod \"router-default-5444994796-zq745\" (UID: \"7b8f105b-569d-47f2-b564-a0830b010e31\") " pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303238 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303258 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d52fce7-049e-441e-8e40-15d044e0319a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xt6kb\" (UID: \"6d52fce7-049e-441e-8e40-15d044e0319a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xt6kb" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303274 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50241f8-d135-4df8-b047-e76fb28b8a3d-srv-cert\") pod \"olm-operator-6b444d44fb-2cgk2\" (UID: \"c50241f8-d135-4df8-b047-e76fb28b8a3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303306 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25db14a1-725f-42ae-a6e9-646546b584c7-trusted-ca\") pod \"ingress-operator-5b745b69d9-wh9bw\" (UID: \"25db14a1-725f-42ae-a6e9-646546b584c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303331 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eabf562-d289-4685-8ee5-ed1525930d19-config-volume\") pod \"collect-profiles-29486880-7srrg\" (UID: \"1eabf562-d289-4685-8ee5-ed1525930d19\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303345 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvhcs\" (UniqueName: \"kubernetes.io/projected/6d52fce7-049e-441e-8e40-15d044e0319a-kube-api-access-nvhcs\") pod \"cluster-samples-operator-665b6dd947-xt6kb\" (UID: \"6d52fce7-049e-441e-8e40-15d044e0319a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xt6kb" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303362 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/25db14a1-725f-42ae-a6e9-646546b584c7-metrics-tls\") pod \"ingress-operator-5b745b69d9-wh9bw\" (UID: \"25db14a1-725f-42ae-a6e9-646546b584c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303402 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk6nf\" (UniqueName: \"kubernetes.io/projected/d8eed212-9137-45e5-8347-1f921fbedb19-kube-api-access-nk6nf\") pod \"multus-admission-controller-857f4d67dd-ck624\" (UID: \"d8eed212-9137-45e5-8347-1f921fbedb19\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ck624" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303421 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlm94\" (UniqueName: \"kubernetes.io/projected/89435826-645d-48a2-aa3b-f5c42003dcbe-kube-api-access-tlm94\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303448 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/aca1e4a5-f702-4803-8f47-7fcb8c7326b6-srv-cert\") pod \"catalog-operator-68c6474976-2xb97\" (UID: \"aca1e4a5-f702-4803-8f47-7fcb8c7326b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303463 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977bd2bf-e652-4b16-b8fc-902d4a1d7860-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4nnjp\" (UID: \"977bd2bf-e652-4b16-b8fc-902d4a1d7860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303489 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f5e70d9-8b16-4684-bd98-4287ccbb6d85-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-975pl\" (UID: \"2f5e70d9-8b16-4684-bd98-4287ccbb6d85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303504 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7srgg\" (UniqueName: \"kubernetes.io/projected/977bd2bf-e652-4b16-b8fc-902d4a1d7860-kube-api-access-7srgg\") pod \"kube-storage-version-migrator-operator-b67b599dd-4nnjp\" (UID: \"977bd2bf-e652-4b16-b8fc-902d4a1d7860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303521 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fdb046fc-eba9-4f07-a1d1-2db71a2d46c1-apiservice-cert\") pod \"packageserver-d55dfcdfc-r6r59\" (UID: 
\"fdb046fc-eba9-4f07-a1d1-2db71a2d46c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303541 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a35921c-db91-45c2-a92c-a42f5b2cab84-metrics-tls\") pod \"dns-operator-744455d44c-cdrbc\" (UID: \"8a35921c-db91-45c2-a92c-a42f5b2cab84\") " pod="openshift-dns-operator/dns-operator-744455d44c-cdrbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303558 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4bc9\" (UniqueName: \"kubernetes.io/projected/7b8f105b-569d-47f2-b564-a0830b010e31-kube-api-access-j4bc9\") pod \"router-default-5444994796-zq745\" (UID: \"7b8f105b-569d-47f2-b564-a0830b010e31\") " pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.303117 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.304170 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3651bf5e-f692-42b1-8d5e-512daae90cc8-images\") pod \"machine-config-operator-74547568cd-6wjpz\" (UID: \"3651bf5e-f692-42b1-8d5e-512daae90cc8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.304848 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29486880-rhzd2"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.306564 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-idp-0-file-data\") 
pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.308840 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a35921c-db91-45c2-a92c-a42f5b2cab84-metrics-tls\") pod \"dns-operator-744455d44c-cdrbc\" (UID: \"8a35921c-db91-45c2-a92c-a42f5b2cab84\") " pod="openshift-dns-operator/dns-operator-744455d44c-cdrbc" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.309248 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3651bf5e-f692-42b1-8d5e-512daae90cc8-proxy-tls\") pod \"machine-config-operator-74547568cd-6wjpz\" (UID: \"3651bf5e-f692-42b1-8d5e-512daae90cc8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.310054 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.310187 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.313165 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.314277 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cdrbc"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.316635 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mkwj8"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.317543 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.317731 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-spx5f"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.319113 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-spx5f" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.319332 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ck624"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.323358 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g2smk"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.325820 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.325991 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6m9lm"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.326215 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.327272 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/aca1e4a5-f702-4803-8f47-7fcb8c7326b6-srv-cert\") pod \"catalog-operator-68c6474976-2xb97\" (UID: \"aca1e4a5-f702-4803-8f47-7fcb8c7326b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.327799 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.328342 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.329732 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6kx9j"] Jan 24 
00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.330678 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.331651 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wvvpg"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.332938 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hjbqw"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.334079 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-x649d"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.335504 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.336468 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.337611 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xt6kb"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.338586 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.339633 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.340669 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp"] Jan 24 00:05:50 crc kubenswrapper[4676]: 
I0124 00:05:50.341698 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-w7sr4"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.342670 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-w7sr4" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.343140 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rsw66"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.344192 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.345172 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-64phz"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.346281 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.346370 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.346512 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.347453 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-spx5f"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.348494 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.349452 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.350522 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.351874 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-64phz"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.352785 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.353753 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n8shl"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.354774 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-w7sr4"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.355874 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-shf2w"] Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.366912 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.375406 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c50241f8-d135-4df8-b047-e76fb28b8a3d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2cgk2\" (UID: \"c50241f8-d135-4df8-b047-e76fb28b8a3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.375456 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/1eabf562-d289-4685-8ee5-ed1525930d19-secret-volume\") pod \"collect-profiles-29486880-7srrg\" (UID: \"1eabf562-d289-4685-8ee5-ed1525930d19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.376106 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/aca1e4a5-f702-4803-8f47-7fcb8c7326b6-profile-collector-cert\") pod \"catalog-operator-68c6474976-2xb97\" (UID: \"aca1e4a5-f702-4803-8f47-7fcb8c7326b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.387089 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.406659 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.426751 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.439545 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d52fce7-049e-441e-8e40-15d044e0319a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xt6kb\" (UID: \"6d52fce7-049e-441e-8e40-15d044e0319a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xt6kb" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.447505 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.467772 4676 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.486314 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.498615 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/25db14a1-725f-42ae-a6e9-646546b584c7-metrics-tls\") pod \"ingress-operator-5b745b69d9-wh9bw\" (UID: \"25db14a1-725f-42ae-a6e9-646546b584c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.506943 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.527572 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.546466 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.572294 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.576681 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25db14a1-725f-42ae-a6e9-646546b584c7-trusted-ca\") pod \"ingress-operator-5b745b69d9-wh9bw\" (UID: \"25db14a1-725f-42ae-a6e9-646546b584c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.586462 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 24 00:05:50 crc 
kubenswrapper[4676]: I0124 00:05:50.594994 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8eed212-9137-45e5-8347-1f921fbedb19-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ck624\" (UID: \"d8eed212-9137-45e5-8347-1f921fbedb19\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ck624" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.606227 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.626632 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.642026 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/596a60f1-3eb7-4a57-af4a-1cb1f37f2824-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ljbnb\" (UID: \"596a60f1-3eb7-4a57-af4a-1cb1f37f2824\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.647209 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.667574 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.687592 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.707799 4676 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.727503 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.735627 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b8f105b-569d-47f2-b564-a0830b010e31-default-certificate\") pod \"router-default-5444994796-zq745\" (UID: \"7b8f105b-569d-47f2-b564-a0830b010e31\") " pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.748619 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.755109 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b8f105b-569d-47f2-b564-a0830b010e31-stats-auth\") pod \"router-default-5444994796-zq745\" (UID: \"7b8f105b-569d-47f2-b564-a0830b010e31\") " pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.767877 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.777437 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b8f105b-569d-47f2-b564-a0830b010e31-metrics-certs\") pod \"router-default-5444994796-zq745\" (UID: \"7b8f105b-569d-47f2-b564-a0830b010e31\") " pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.787207 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.795923 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b8f105b-569d-47f2-b564-a0830b010e31-service-ca-bundle\") pod \"router-default-5444994796-zq745\" (UID: \"7b8f105b-569d-47f2-b564-a0830b010e31\") " pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.808142 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.847797 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.868455 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.886804 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.895277 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d89ff2-f919-459e-8089-5097aab0f4e2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sxjwn\" (UID: \"c2d89ff2-f919-459e-8089-5097aab0f4e2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.908045 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.915557 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c2d89ff2-f919-459e-8089-5097aab0f4e2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sxjwn\" (UID: \"c2d89ff2-f919-459e-8089-5097aab0f4e2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.927037 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.935934 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fdb046fc-eba9-4f07-a1d1-2db71a2d46c1-webhook-cert\") pod \"packageserver-d55dfcdfc-r6r59\" (UID: \"fdb046fc-eba9-4f07-a1d1-2db71a2d46c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.938632 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fdb046fc-eba9-4f07-a1d1-2db71a2d46c1-apiservice-cert\") pod \"packageserver-d55dfcdfc-r6r59\" (UID: \"fdb046fc-eba9-4f07-a1d1-2db71a2d46c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.947431 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.967484 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.977297 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4d53a745-f985-4659-b62c-ce297ce8ce85-signing-key\") pod \"service-ca-9c57cc56f-6kx9j\" (UID: \"4d53a745-f985-4659-b62c-ce297ce8ce85\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-6kx9j" Jan 24 00:05:50 crc kubenswrapper[4676]: I0124 00:05:50.987029 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.007872 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.027933 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.035685 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4d53a745-f985-4659-b62c-ce297ce8ce85-signing-cabundle\") pod \"service-ca-9c57cc56f-6kx9j\" (UID: \"4d53a745-f985-4659-b62c-ce297ce8ce85\") " pod="openshift-service-ca/service-ca-9c57cc56f-6kx9j" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.047201 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.058855 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d605c02-a15d-46a8-942c-cd85e6ce5452-proxy-tls\") pod \"machine-config-controller-84d6567774-nhmlj\" (UID: \"9d605c02-a15d-46a8-942c-cd85e6ce5452\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.067199 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.087407 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 
24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.099342 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c50241f8-d135-4df8-b047-e76fb28b8a3d-srv-cert\") pod \"olm-operator-6b444d44fb-2cgk2\" (UID: \"c50241f8-d135-4df8-b047-e76fb28b8a3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.106916 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.116761 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977bd2bf-e652-4b16-b8fc-902d4a1d7860-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4nnjp\" (UID: \"977bd2bf-e652-4b16-b8fc-902d4a1d7860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.127691 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.137617 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/977bd2bf-e652-4b16-b8fc-902d4a1d7860-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4nnjp\" (UID: \"977bd2bf-e652-4b16-b8fc-902d4a1d7860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.148323 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 
00:05:51.165833 4676 request.go:700] Waited for 1.016067431s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.167303 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.186518 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.206890 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.233715 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.247441 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.255998 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9350aae-3053-431e-a2d6-2137f990ca08-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9h9cg\" (UID: \"c9350aae-3053-431e-a2d6-2137f990ca08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.269875 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 24 00:05:51 crc kubenswrapper[4676]: 
I0124 00:05:51.274471 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9350aae-3053-431e-a2d6-2137f990ca08-config\") pod \"kube-apiserver-operator-766d6c64bb-9h9cg\" (UID: \"c9350aae-3053-431e-a2d6-2137f990ca08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.287502 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 24 00:05:51 crc kubenswrapper[4676]: E0124 00:05:51.302265 4676 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:05:51 crc kubenswrapper[4676]: E0124 00:05:51.302289 4676 secret.go:188] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 24 00:05:51 crc kubenswrapper[4676]: E0124 00:05:51.302352 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-trusted-ca podName:7af03157-ee92-4e72-a775-acaeabb73e65 nodeName:}" failed. No retries permitted until 2026-01-24 00:05:51.802325304 +0000 UTC m=+135.832296345 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-trusted-ca") pod "marketplace-operator-79b997595-rsw66" (UID: "7af03157-ee92-4e72-a775-acaeabb73e65") : failed to sync configmap cache: timed out waiting for the condition Jan 24 00:05:51 crc kubenswrapper[4676]: E0124 00:05:51.302356 4676 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 24 00:05:51 crc kubenswrapper[4676]: E0124 00:05:51.302414 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f5e70d9-8b16-4684-bd98-4287ccbb6d85-serving-cert podName:2f5e70d9-8b16-4684-bd98-4287ccbb6d85 nodeName:}" failed. No retries permitted until 2026-01-24 00:05:51.802367135 +0000 UTC m=+135.832338176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2f5e70d9-8b16-4684-bd98-4287ccbb6d85-serving-cert") pod "kube-controller-manager-operator-78b949d7b-975pl" (UID: "2f5e70d9-8b16-4684-bd98-4287ccbb6d85") : failed to sync secret cache: timed out waiting for the condition Jan 24 00:05:51 crc kubenswrapper[4676]: E0124 00:05:51.302452 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-operator-metrics podName:7af03157-ee92-4e72-a775-acaeabb73e65 nodeName:}" failed. No retries permitted until 2026-01-24 00:05:51.802433797 +0000 UTC m=+135.832404838 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-operator-metrics") pod "marketplace-operator-79b997595-rsw66" (UID: "7af03157-ee92-4e72-a775-acaeabb73e65") : failed to sync secret cache: timed out waiting for the condition Jan 24 00:05:51 crc kubenswrapper[4676]: E0124 00:05:51.303656 4676 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:05:51 crc kubenswrapper[4676]: E0124 00:05:51.303729 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2f5e70d9-8b16-4684-bd98-4287ccbb6d85-config podName:2f5e70d9-8b16-4684-bd98-4287ccbb6d85 nodeName:}" failed. No retries permitted until 2026-01-24 00:05:51.803704589 +0000 UTC m=+135.833675640 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/2f5e70d9-8b16-4684-bd98-4287ccbb6d85-config") pod "kube-controller-manager-operator-78b949d7b-975pl" (UID: "2f5e70d9-8b16-4684-bd98-4287ccbb6d85") : failed to sync configmap cache: timed out waiting for the condition Jan 24 00:05:51 crc kubenswrapper[4676]: E0124 00:05:51.305707 4676 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:05:51 crc kubenswrapper[4676]: E0124 00:05:51.305944 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1eabf562-d289-4685-8ee5-ed1525930d19-config-volume podName:1eabf562-d289-4685-8ee5-ed1525930d19 nodeName:}" failed. No retries permitted until 2026-01-24 00:05:51.80593158 +0000 UTC m=+135.835902591 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1eabf562-d289-4685-8ee5-ed1525930d19-config-volume") pod "collect-profiles-29486880-7srrg" (UID: "1eabf562-d289-4685-8ee5-ed1525930d19") : failed to sync configmap cache: timed out waiting for the condition Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.307578 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.327303 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.347088 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.377870 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.387775 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.407052 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.427603 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.447198 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.467095 4676 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.487612 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.507672 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.527077 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.547690 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.567013 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.586607 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.606774 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.626616 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.647073 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.668671 4676 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.687558 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.707718 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.767146 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv9mz\" (UniqueName: \"kubernetes.io/projected/b37a2847-a94a-4a0c-b092-1ed7155a2d35-kube-api-access-nv9mz\") pod \"controller-manager-879f6c89f-qzmmz\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.787591 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87hzh\" (UniqueName: \"kubernetes.io/projected/9c51c973-c370-41e8-b167-25d3b11418bf-kube-api-access-87hzh\") pod \"machine-api-operator-5694c8668f-jmv6d\" (UID: \"9c51c973-c370-41e8-b167-25d3b11418bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.801661 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.804636 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f9mm\" (UniqueName: \"kubernetes.io/projected/2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e-kube-api-access-5f9mm\") pod \"apiserver-76f77b778f-2scbc\" (UID: \"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e\") " pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.818675 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.822067 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmt92\" (UniqueName: \"kubernetes.io/projected/d8d73604-754b-4dea-9be4-3451964e5589-kube-api-access-cmt92\") pod \"machine-approver-56656f9798-ml6vn\" (UID: \"d8d73604-754b-4dea-9be4-3451964e5589\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.833989 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eabf562-d289-4685-8ee5-ed1525930d19-config-volume\") pod \"collect-profiles-29486880-7srrg\" (UID: \"1eabf562-d289-4685-8ee5-ed1525930d19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.834260 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rsw66\" (UID: \"7af03157-ee92-4e72-a775-acaeabb73e65\") " pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:05:51 
crc kubenswrapper[4676]: I0124 00:05:51.834417 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f5e70d9-8b16-4684-bd98-4287ccbb6d85-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-975pl\" (UID: \"2f5e70d9-8b16-4684-bd98-4287ccbb6d85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.834541 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rsw66\" (UID: \"7af03157-ee92-4e72-a775-acaeabb73e65\") " pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.834596 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f5e70d9-8b16-4684-bd98-4287ccbb6d85-config\") pod \"kube-controller-manager-operator-78b949d7b-975pl\" (UID: \"2f5e70d9-8b16-4684-bd98-4287ccbb6d85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.834731 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eabf562-d289-4685-8ee5-ed1525930d19-config-volume\") pod \"collect-profiles-29486880-7srrg\" (UID: \"1eabf562-d289-4685-8ee5-ed1525930d19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.835597 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f5e70d9-8b16-4684-bd98-4287ccbb6d85-config\") pod 
\"kube-controller-manager-operator-78b949d7b-975pl\" (UID: \"2f5e70d9-8b16-4684-bd98-4287ccbb6d85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.837679 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rsw66\" (UID: \"7af03157-ee92-4e72-a775-acaeabb73e65\") " pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.838345 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rsw66\" (UID: \"7af03157-ee92-4e72-a775-acaeabb73e65\") " pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.840040 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f5e70d9-8b16-4684-bd98-4287ccbb6d85-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-975pl\" (UID: \"2f5e70d9-8b16-4684-bd98-4287ccbb6d85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.845444 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th8cd\" (UniqueName: \"kubernetes.io/projected/b189330a-ee63-45f1-8104-4ef173f8ee22-kube-api-access-th8cd\") pod \"apiserver-7bbb656c7d-7d9xm\" (UID: \"b189330a-ee63-45f1-8104-4ef173f8ee22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.867352 4676 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.867765 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p6c5\" (UniqueName: \"kubernetes.io/projected/35efd97c-0521-428a-896d-b67490207db5-kube-api-access-6p6c5\") pod \"openshift-config-operator-7777fb866f-7zllh\" (UID: \"35efd97c-0521-428a-896d-b67490207db5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.883414 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qp87\" (UniqueName: \"kubernetes.io/projected/00beace7-1e83-40ed-8d92-6da0cae7817f-kube-api-access-4qp87\") pod \"image-pruner-29486880-rhzd2\" (UID: \"00beace7-1e83-40ed-8d92-6da0cae7817f\") " pod="openshift-image-registry/image-pruner-29486880-rhzd2" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.884975 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.910640 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ksdz\" (UniqueName: \"kubernetes.io/projected/32de8698-4bd5-4154-92a3-76930504a72d-kube-api-access-4ksdz\") pod \"console-operator-58897d9998-x649d\" (UID: \"32de8698-4bd5-4154-92a3-76930504a72d\") " pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.921916 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.929198 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6dpw\" (UniqueName: \"kubernetes.io/projected/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-kube-api-access-f6dpw\") pod \"console-f9d7485db-g2smk\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.947979 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.949661 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4cw9\" (UniqueName: \"kubernetes.io/projected/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-kube-api-access-h4cw9\") pod \"route-controller-manager-6576b87f9c-n6vx5\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.962741 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6vn7\" (UniqueName: \"kubernetes.io/projected/e6351c23-e315-4c92-a467-380da403d3c4-kube-api-access-q6vn7\") pod \"authentication-operator-69f744f599-ft4kq\" (UID: \"e6351c23-e315-4c92-a467-380da403d3c4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.986864 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqnsp\" (UniqueName: \"kubernetes.io/projected/a6360940-ea9d-456d-b546-5a20af404ee5-kube-api-access-jqnsp\") pod \"openshift-apiserver-operator-796bbdcf4f-zc8ts\" (UID: \"a6360940-ea9d-456d-b546-5a20af404ee5\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts" Jan 24 00:05:51 crc kubenswrapper[4676]: I0124 00:05:51.994793 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29486880-rhzd2" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.004310 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fftp\" (UniqueName: \"kubernetes.io/projected/f8911791-9db1-4463-997e-1ed50da17324-kube-api-access-6fftp\") pod \"cluster-image-registry-operator-dc59b4c8b-rkpbj\" (UID: \"f8911791-9db1-4463-997e-1ed50da17324\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.028860 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.028665 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.032586 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8911791-9db1-4463-997e-1ed50da17324-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rkpbj\" (UID: \"f8911791-9db1-4463-997e-1ed50da17324\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.043025 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jmv6d"] Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.047858 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.071229 
4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.084891 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.088577 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.091790 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.101709 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.106888 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.116725 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qzmmz"] Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.119931 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.129967 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.144845 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2scbc"] Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.147081 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.164330 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" event={"ID":"d8d73604-754b-4dea-9be4-3451964e5589","Type":"ContainerStarted","Data":"14117bd130e68b0fc47a586d86a9175abdcd7dcd9d55c2e32081a4637f074f9e"} Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.170921 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.187253 4676 request.go:700] Waited for 1.885694613s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.190070 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" event={"ID":"b37a2847-a94a-4a0c-b092-1ed7155a2d35","Type":"ContainerStarted","Data":"63dfdd35be2732ab824c6cbb0405da330ff5fefbf093c8f9f68db2d40a451da2"} Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.211064 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm9zj\" 
(UniqueName: \"kubernetes.io/projected/aca1e4a5-f702-4803-8f47-7fcb8c7326b6-kube-api-access-mm9zj\") pod \"catalog-operator-68c6474976-2xb97\" (UID: \"aca1e4a5-f702-4803-8f47-7fcb8c7326b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.212869 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" event={"ID":"9c51c973-c370-41e8-b167-25d3b11418bf","Type":"ContainerStarted","Data":"9dea6e623cd16e538adb931ab7cfe35f12d462deb041c46eff49d5d7b18c7cde"} Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.217186 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7zllh"] Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.224702 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9350aae-3053-431e-a2d6-2137f990ca08-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9h9cg\" (UID: \"c9350aae-3053-431e-a2d6-2137f990ca08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.244158 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gszgz\" (UniqueName: \"kubernetes.io/projected/c50241f8-d135-4df8-b047-e76fb28b8a3d-kube-api-access-gszgz\") pod \"olm-operator-6b444d44fb-2cgk2\" (UID: \"c50241f8-d135-4df8-b047-e76fb28b8a3d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.253016 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.277802 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.293902 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcwrj\" (UniqueName: \"kubernetes.io/projected/3651bf5e-f692-42b1-8d5e-512daae90cc8-kube-api-access-dcwrj\") pod \"machine-config-operator-74547568cd-6wjpz\" (UID: \"3651bf5e-f692-42b1-8d5e-512daae90cc8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.302984 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96br9\" (UniqueName: \"kubernetes.io/projected/25db14a1-725f-42ae-a6e9-646546b584c7-kube-api-access-96br9\") pod \"ingress-operator-5b745b69d9-wh9bw\" (UID: \"25db14a1-725f-42ae-a6e9-646546b584c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.309447 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6kdj\" (UniqueName: \"kubernetes.io/projected/9d605c02-a15d-46a8-942c-cd85e6ce5452-kube-api-access-h6kdj\") pod \"machine-config-controller-84d6567774-nhmlj\" (UID: \"9d605c02-a15d-46a8-942c-cd85e6ce5452\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.326500 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm4tl\" (UniqueName: \"kubernetes.io/projected/7af03157-ee92-4e72-a775-acaeabb73e65-kube-api-access-jm4tl\") pod \"marketplace-operator-79b997595-rsw66\" (UID: \"7af03157-ee92-4e72-a775-acaeabb73e65\") " pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.345538 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dxkrp\" (UniqueName: \"kubernetes.io/projected/4b605a66-7904-4596-a67f-ea21ef41a24b-kube-api-access-dxkrp\") pod \"downloads-7954f5f757-6m9lm\" (UID: \"4b605a66-7904-4596-a67f-ea21ef41a24b\") " pod="openshift-console/downloads-7954f5f757-6m9lm" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.367321 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29486880-rhzd2"] Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.370103 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmkx8\" (UniqueName: \"kubernetes.io/projected/a19dcc7f-c3e9-4aa8-90dc-412550a8060f-kube-api-access-vmkx8\") pod \"migrator-59844c95c7-hjbqw\" (UID: \"a19dcc7f-c3e9-4aa8-90dc-412550a8060f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjbqw" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.387935 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2d89ff2-f919-459e-8089-5097aab0f4e2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sxjwn\" (UID: \"c2d89ff2-f919-459e-8089-5097aab0f4e2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.413159 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4bc9\" (UniqueName: \"kubernetes.io/projected/7b8f105b-569d-47f2-b564-a0830b010e31-kube-api-access-j4bc9\") pod \"router-default-5444994796-zq745\" (UID: \"7b8f105b-569d-47f2-b564-a0830b010e31\") " pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.425461 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-6m9lm" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.428232 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-x649d"] Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.436836 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvpn8\" (UniqueName: \"kubernetes.io/projected/fdb046fc-eba9-4f07-a1d1-2db71a2d46c1-kube-api-access-mvpn8\") pod \"packageserver-d55dfcdfc-r6r59\" (UID: \"fdb046fc-eba9-4f07-a1d1-2db71a2d46c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.439912 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.461239 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.463202 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg8sc\" (UniqueName: \"kubernetes.io/projected/596a60f1-3eb7-4a57-af4a-1cb1f37f2824-kube-api-access-vg8sc\") pod \"package-server-manager-789f6589d5-ljbnb\" (UID: \"596a60f1-3eb7-4a57-af4a-1cb1f37f2824\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.481829 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25db14a1-725f-42ae-a6e9-646546b584c7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wh9bw\" (UID: \"25db14a1-725f-42ae-a6e9-646546b584c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.492285 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvj8q\" (UniqueName: \"kubernetes.io/projected/1eabf562-d289-4685-8ee5-ed1525930d19-kube-api-access-gvj8q\") pod \"collect-profiles-29486880-7srrg\" (UID: \"1eabf562-d289-4685-8ee5-ed1525930d19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.492653 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.501240 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.504734 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5n78\" (UniqueName: \"kubernetes.io/projected/4d53a745-f985-4659-b62c-ce297ce8ce85-kube-api-access-r5n78\") pod \"service-ca-9c57cc56f-6kx9j\" (UID: \"4d53a745-f985-4659-b62c-ce297ce8ce85\") " pod="openshift-service-ca/service-ca-9c57cc56f-6kx9j" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.507268 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.515967 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.524504 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6kx9j" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.531760 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.532016 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr5xr\" (UniqueName: \"kubernetes.io/projected/8a35921c-db91-45c2-a92c-a42f5b2cab84-kube-api-access-gr5xr\") pod \"dns-operator-744455d44c-cdrbc\" (UID: \"8a35921c-db91-45c2-a92c-a42f5b2cab84\") " pod="openshift-dns-operator/dns-operator-744455d44c-cdrbc" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.538273 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.549151 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk6nf\" (UniqueName: \"kubernetes.io/projected/d8eed212-9137-45e5-8347-1f921fbedb19-kube-api-access-nk6nf\") pod \"multus-admission-controller-857f4d67dd-ck624\" (UID: \"d8eed212-9137-45e5-8347-1f921fbedb19\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ck624" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.563580 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvhcs\" (UniqueName: \"kubernetes.io/projected/6d52fce7-049e-441e-8e40-15d044e0319a-kube-api-access-nvhcs\") pod \"cluster-samples-operator-665b6dd947-xt6kb\" (UID: \"6d52fce7-049e-441e-8e40-15d044e0319a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xt6kb" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.570028 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.577371 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjbqw" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.581570 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f5e70d9-8b16-4684-bd98-4287ccbb6d85-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-975pl\" (UID: \"2f5e70d9-8b16-4684-bd98-4287ccbb6d85\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.600690 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlm94\" (UniqueName: \"kubernetes.io/projected/89435826-645d-48a2-aa3b-f5c42003dcbe-kube-api-access-tlm94\") pod \"oauth-openshift-558db77b4-kvcv8\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.603464 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.619103 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g2smk"] Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.621173 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7srgg\" (UniqueName: \"kubernetes.io/projected/977bd2bf-e652-4b16-b8fc-902d4a1d7860-kube-api-access-7srgg\") pod \"kube-storage-version-migrator-operator-b67b599dd-4nnjp\" (UID: \"977bd2bf-e652-4b16-b8fc-902d4a1d7860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.627506 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.647174 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.667509 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.687305 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.706859 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.727201 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.732873 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xt6kb" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.747266 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cdrbc" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.748306 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.786783 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ck624" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.787298 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.791317 4676 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.791764 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.807893 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.821080 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm"] Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.825567 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5"] Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.846648 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.927943 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.931578 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.936465 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.936537 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-registry-tls\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.936615 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9887557b-81eb-4651-8da2-fd34d7b0be97-ca-trust-extracted\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.936650 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-bound-sa-token\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.936829 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9887557b-81eb-4651-8da2-fd34d7b0be97-registry-certificates\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.936905 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9887557b-81eb-4651-8da2-fd34d7b0be97-trusted-ca\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.936975 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9887557b-81eb-4651-8da2-fd34d7b0be97-installation-pull-secrets\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:52 crc kubenswrapper[4676]: I0124 00:05:52.937006 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlknz\" (UniqueName: \"kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-kube-api-access-hlknz\") pod \"image-registry-697d97f7c8-shf2w\" (UID: 
\"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:52 crc kubenswrapper[4676]: E0124 00:05:52.941274 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:53.441250807 +0000 UTC m=+137.471222048 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.039835 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:53 crc kubenswrapper[4676]: E0124 00:05:53.040099 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:53.540069631 +0000 UTC m=+137.570040672 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040408 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae793283-426a-4e02-b96b-89c3f16f2d16-config\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040459 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-bound-sa-token\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040481 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5c752b5-1413-4c4e-aea6-5302a7c69467-cert\") pod \"ingress-canary-spx5f\" (UID: \"f5c752b5-1413-4c4e-aea6-5302a7c69467\") " pod="openshift-ingress-canary/ingress-canary-spx5f" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040502 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8ljj\" (UID: 
\"e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040535 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fv6m\" (UniqueName: \"kubernetes.io/projected/f53afe6a-307c-4b0d-88cb-596703f35f8a-kube-api-access-4fv6m\") pod \"control-plane-machine-set-operator-78cbb6b69f-mkwj8\" (UID: \"f53afe6a-307c-4b0d-88cb-596703f35f8a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mkwj8" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040562 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl8ks\" (UniqueName: \"kubernetes.io/projected/3e6f77cf-0bf1-4e28-a894-1ca5ee320c58-kube-api-access-cl8ks\") pod \"machine-config-server-lgg5l\" (UID: \"3e6f77cf-0bf1-4e28-a894-1ca5ee320c58\") " pod="openshift-machine-config-operator/machine-config-server-lgg5l" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040589 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6h9j\" (UniqueName: \"kubernetes.io/projected/e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106-kube-api-access-l6h9j\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8ljj\" (UID: \"e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040606 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e6f77cf-0bf1-4e28-a894-1ca5ee320c58-certs\") pod \"machine-config-server-lgg5l\" (UID: \"3e6f77cf-0bf1-4e28-a894-1ca5ee320c58\") " pod="openshift-machine-config-operator/machine-config-server-lgg5l" Jan 24 00:05:53 crc 
kubenswrapper[4676]: I0124 00:05:53.040635 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f53afe6a-307c-4b0d-88cb-596703f35f8a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mkwj8\" (UID: \"f53afe6a-307c-4b0d-88cb-596703f35f8a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mkwj8" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040656 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8ljj\" (UID: \"e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040695 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ae793283-426a-4e02-b96b-89c3f16f2d16-etcd-ca\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040734 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae793283-426a-4e02-b96b-89c3f16f2d16-etcd-client\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040767 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/9887557b-81eb-4651-8da2-fd34d7b0be97-registry-certificates\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040807 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9887557b-81eb-4651-8da2-fd34d7b0be97-trusted-ca\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040836 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9887557b-81eb-4651-8da2-fd34d7b0be97-installation-pull-secrets\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040873 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlknz\" (UniqueName: \"kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-kube-api-access-hlknz\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040918 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27tsj\" (UniqueName: \"kubernetes.io/projected/f5c752b5-1413-4c4e-aea6-5302a7c69467-kube-api-access-27tsj\") pod \"ingress-canary-spx5f\" (UID: \"f5c752b5-1413-4c4e-aea6-5302a7c69467\") " pod="openshift-ingress-canary/ingress-canary-spx5f" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.040996 4676 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-959hq\" (UniqueName: \"kubernetes.io/projected/ae793283-426a-4e02-b96b-89c3f16f2d16-kube-api-access-959hq\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.041032 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.041055 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae793283-426a-4e02-b96b-89c3f16f2d16-serving-cert\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.041070 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae793283-426a-4e02-b96b-89c3f16f2d16-etcd-service-ca\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.041091 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-registry-tls\") pod \"image-registry-697d97f7c8-shf2w\" (UID: 
\"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.041109 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e6f77cf-0bf1-4e28-a894-1ca5ee320c58-node-bootstrap-token\") pod \"machine-config-server-lgg5l\" (UID: \"3e6f77cf-0bf1-4e28-a894-1ca5ee320c58\") " pod="openshift-machine-config-operator/machine-config-server-lgg5l" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.041144 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9887557b-81eb-4651-8da2-fd34d7b0be97-ca-trust-extracted\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.041572 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9887557b-81eb-4651-8da2-fd34d7b0be97-ca-trust-extracted\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.045294 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9887557b-81eb-4651-8da2-fd34d7b0be97-trusted-ca\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: E0124 00:05:53.046144 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-24 00:05:53.546122725 +0000 UTC m=+137.576093926 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.054343 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-registry-tls\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.060515 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ft4kq"] Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.063229 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9887557b-81eb-4651-8da2-fd34d7b0be97-registry-certificates\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.065679 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9887557b-81eb-4651-8da2-fd34d7b0be97-installation-pull-secrets\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.070276 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj"] Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.071170 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-bound-sa-token\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.082315 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlknz\" (UniqueName: \"kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-kube-api-access-hlknz\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.096334 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg"] Jan 24 00:05:53 crc kubenswrapper[4676]: W0124 00:05:53.127896 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6351c23_e315_4c92_a467_380da403d3c4.slice/crio-c3711f3be897a53e7f5ed62f4dfea1182824530d785a6c28ea3f9882ad5f65da WatchSource:0}: Error finding container c3711f3be897a53e7f5ed62f4dfea1182824530d785a6c28ea3f9882ad5f65da: Status 404 returned error can't find the container with id c3711f3be897a53e7f5ed62f4dfea1182824530d785a6c28ea3f9882ad5f65da Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.142100 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.142348 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5c752b5-1413-4c4e-aea6-5302a7c69467-cert\") pod \"ingress-canary-spx5f\" (UID: \"f5c752b5-1413-4c4e-aea6-5302a7c69467\") " pod="openshift-ingress-canary/ingress-canary-spx5f" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.142441 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8ljj\" (UID: \"e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.142544 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fv6m\" (UniqueName: \"kubernetes.io/projected/f53afe6a-307c-4b0d-88cb-596703f35f8a-kube-api-access-4fv6m\") pod \"control-plane-machine-set-operator-78cbb6b69f-mkwj8\" (UID: \"f53afe6a-307c-4b0d-88cb-596703f35f8a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mkwj8" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.142600 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl8ks\" (UniqueName: \"kubernetes.io/projected/3e6f77cf-0bf1-4e28-a894-1ca5ee320c58-kube-api-access-cl8ks\") pod \"machine-config-server-lgg5l\" (UID: \"3e6f77cf-0bf1-4e28-a894-1ca5ee320c58\") " pod="openshift-machine-config-operator/machine-config-server-lgg5l" Jan 24 00:05:53 crc 
kubenswrapper[4676]: I0124 00:05:53.142634 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6h9j\" (UniqueName: \"kubernetes.io/projected/e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106-kube-api-access-l6h9j\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8ljj\" (UID: \"e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.142664 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/290fd523-8c24-458a-8abb-e32ca43caae1-registration-dir\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.142692 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e6f77cf-0bf1-4e28-a894-1ca5ee320c58-certs\") pod \"machine-config-server-lgg5l\" (UID: \"3e6f77cf-0bf1-4e28-a894-1ca5ee320c58\") " pod="openshift-machine-config-operator/machine-config-server-lgg5l" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.142725 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f53afe6a-307c-4b0d-88cb-596703f35f8a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mkwj8\" (UID: \"f53afe6a-307c-4b0d-88cb-596703f35f8a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mkwj8" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.142752 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c-config\") pod \"service-ca-operator-777779d784-n8shl\" (UID: \"bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n8shl" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.142797 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/290fd523-8c24-458a-8abb-e32ca43caae1-mountpoint-dir\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.142821 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8ljj\" (UID: \"e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.142853 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ae793283-426a-4e02-b96b-89c3f16f2d16-etcd-ca\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.142942 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae793283-426a-4e02-b96b-89c3f16f2d16-etcd-client\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: E0124 00:05:53.143033 4676 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:53.642992486 +0000 UTC m=+137.672963487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.143364 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs2tp\" (UniqueName: \"kubernetes.io/projected/290fd523-8c24-458a-8abb-e32ca43caae1-kube-api-access-qs2tp\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.143441 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce7f5025-ebd8-4cbf-af5a-460fbeb681d1-config-volume\") pod \"dns-default-w7sr4\" (UID: \"ce7f5025-ebd8-4cbf-af5a-460fbeb681d1\") " pod="openshift-dns/dns-default-w7sr4" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.143480 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/290fd523-8c24-458a-8abb-e32ca43caae1-socket-dir\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc 
kubenswrapper[4676]: I0124 00:05:53.143510 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27tsj\" (UniqueName: \"kubernetes.io/projected/f5c752b5-1413-4c4e-aea6-5302a7c69467-kube-api-access-27tsj\") pod \"ingress-canary-spx5f\" (UID: \"f5c752b5-1413-4c4e-aea6-5302a7c69467\") " pod="openshift-ingress-canary/ingress-canary-spx5f" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.143615 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/290fd523-8c24-458a-8abb-e32ca43caae1-csi-data-dir\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.143652 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-959hq\" (UniqueName: \"kubernetes.io/projected/ae793283-426a-4e02-b96b-89c3f16f2d16-kube-api-access-959hq\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.143683 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.143708 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpntf\" (UniqueName: \"kubernetes.io/projected/ce7f5025-ebd8-4cbf-af5a-460fbeb681d1-kube-api-access-xpntf\") pod \"dns-default-w7sr4\" (UID: 
\"ce7f5025-ebd8-4cbf-af5a-460fbeb681d1\") " pod="openshift-dns/dns-default-w7sr4" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.143732 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/290fd523-8c24-458a-8abb-e32ca43caae1-plugins-dir\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.143753 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c-serving-cert\") pod \"service-ca-operator-777779d784-n8shl\" (UID: \"bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n8shl" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.143777 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g949x\" (UniqueName: \"kubernetes.io/projected/bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c-kube-api-access-g949x\") pod \"service-ca-operator-777779d784-n8shl\" (UID: \"bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n8shl" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.143814 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae793283-426a-4e02-b96b-89c3f16f2d16-serving-cert\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.143834 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ae793283-426a-4e02-b96b-89c3f16f2d16-etcd-service-ca\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.143871 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e6f77cf-0bf1-4e28-a894-1ca5ee320c58-node-bootstrap-token\") pod \"machine-config-server-lgg5l\" (UID: \"3e6f77cf-0bf1-4e28-a894-1ca5ee320c58\") " pod="openshift-machine-config-operator/machine-config-server-lgg5l" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.143978 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce7f5025-ebd8-4cbf-af5a-460fbeb681d1-metrics-tls\") pod \"dns-default-w7sr4\" (UID: \"ce7f5025-ebd8-4cbf-af5a-460fbeb681d1\") " pod="openshift-dns/dns-default-w7sr4" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.144034 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae793283-426a-4e02-b96b-89c3f16f2d16-config\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.144677 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae793283-426a-4e02-b96b-89c3f16f2d16-config\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.146495 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8ljj\" (UID: \"e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.152774 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f53afe6a-307c-4b0d-88cb-596703f35f8a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mkwj8\" (UID: \"f53afe6a-307c-4b0d-88cb-596703f35f8a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mkwj8" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.153398 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae793283-426a-4e02-b96b-89c3f16f2d16-etcd-service-ca\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: E0124 00:05:53.155159 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:53.655140856 +0000 UTC m=+137.685111857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.159311 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8ljj\" (UID: \"e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.159416 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae793283-426a-4e02-b96b-89c3f16f2d16-etcd-client\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.164462 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e6f77cf-0bf1-4e28-a894-1ca5ee320c58-certs\") pod \"machine-config-server-lgg5l\" (UID: \"3e6f77cf-0bf1-4e28-a894-1ca5ee320c58\") " pod="openshift-machine-config-operator/machine-config-server-lgg5l" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.167323 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae793283-426a-4e02-b96b-89c3f16f2d16-serving-cert\") pod \"etcd-operator-b45778765-wvvpg\" (UID: 
\"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.168811 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ae793283-426a-4e02-b96b-89c3f16f2d16-etcd-ca\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.168856 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5c752b5-1413-4c4e-aea6-5302a7c69467-cert\") pod \"ingress-canary-spx5f\" (UID: \"f5c752b5-1413-4c4e-aea6-5302a7c69467\") " pod="openshift-ingress-canary/ingress-canary-spx5f" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.194915 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fv6m\" (UniqueName: \"kubernetes.io/projected/f53afe6a-307c-4b0d-88cb-596703f35f8a-kube-api-access-4fv6m\") pod \"control-plane-machine-set-operator-78cbb6b69f-mkwj8\" (UID: \"f53afe6a-307c-4b0d-88cb-596703f35f8a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mkwj8" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.201619 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e6f77cf-0bf1-4e28-a894-1ca5ee320c58-node-bootstrap-token\") pod \"machine-config-server-lgg5l\" (UID: \"3e6f77cf-0bf1-4e28-a894-1ca5ee320c58\") " pod="openshift-machine-config-operator/machine-config-server-lgg5l" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.207627 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl8ks\" (UniqueName: \"kubernetes.io/projected/3e6f77cf-0bf1-4e28-a894-1ca5ee320c58-kube-api-access-cl8ks\") pod 
\"machine-config-server-lgg5l\" (UID: \"3e6f77cf-0bf1-4e28-a894-1ca5ee320c58\") " pod="openshift-machine-config-operator/machine-config-server-lgg5l" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.214042 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lgg5l" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.246490 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6h9j\" (UniqueName: \"kubernetes.io/projected/e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106-kube-api-access-l6h9j\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8ljj\" (UID: \"e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.246661 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.246875 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs2tp\" (UniqueName: \"kubernetes.io/projected/290fd523-8c24-458a-8abb-e32ca43caae1-kube-api-access-qs2tp\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.246901 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce7f5025-ebd8-4cbf-af5a-460fbeb681d1-config-volume\") pod \"dns-default-w7sr4\" (UID: \"ce7f5025-ebd8-4cbf-af5a-460fbeb681d1\") " 
pod="openshift-dns/dns-default-w7sr4" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.246919 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/290fd523-8c24-458a-8abb-e32ca43caae1-socket-dir\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.246954 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/290fd523-8c24-458a-8abb-e32ca43caae1-csi-data-dir\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: E0124 00:05:53.246996 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:53.746968816 +0000 UTC m=+137.776939817 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.247037 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.247052 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/290fd523-8c24-458a-8abb-e32ca43caae1-csi-data-dir\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.247071 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpntf\" (UniqueName: \"kubernetes.io/projected/ce7f5025-ebd8-4cbf-af5a-460fbeb681d1-kube-api-access-xpntf\") pod \"dns-default-w7sr4\" (UID: \"ce7f5025-ebd8-4cbf-af5a-460fbeb681d1\") " pod="openshift-dns/dns-default-w7sr4" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.247109 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/290fd523-8c24-458a-8abb-e32ca43caae1-plugins-dir\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " 
pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.247140 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c-serving-cert\") pod \"service-ca-operator-777779d784-n8shl\" (UID: \"bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n8shl" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.247162 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g949x\" (UniqueName: \"kubernetes.io/projected/bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c-kube-api-access-g949x\") pod \"service-ca-operator-777779d784-n8shl\" (UID: \"bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n8shl" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.247238 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce7f5025-ebd8-4cbf-af5a-460fbeb681d1-metrics-tls\") pod \"dns-default-w7sr4\" (UID: \"ce7f5025-ebd8-4cbf-af5a-460fbeb681d1\") " pod="openshift-dns/dns-default-w7sr4" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.247319 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/290fd523-8c24-458a-8abb-e32ca43caae1-registration-dir\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.247518 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c-config\") pod \"service-ca-operator-777779d784-n8shl\" (UID: 
\"bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n8shl" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.247546 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/290fd523-8c24-458a-8abb-e32ca43caae1-mountpoint-dir\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.247712 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/290fd523-8c24-458a-8abb-e32ca43caae1-mountpoint-dir\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: E0124 00:05:53.248023 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:53.748015728 +0000 UTC m=+137.777986729 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.248608 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/290fd523-8c24-458a-8abb-e32ca43caae1-plugins-dir\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.249180 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/290fd523-8c24-458a-8abb-e32ca43caae1-socket-dir\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.249244 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/290fd523-8c24-458a-8abb-e32ca43caae1-registration-dir\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.249537 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c-config\") pod \"service-ca-operator-777779d784-n8shl\" (UID: \"bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n8shl" 
Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.250032 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce7f5025-ebd8-4cbf-af5a-460fbeb681d1-config-volume\") pod \"dns-default-w7sr4\" (UID: \"ce7f5025-ebd8-4cbf-af5a-460fbeb681d1\") " pod="openshift-dns/dns-default-w7sr4" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.255816 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c-serving-cert\") pod \"service-ca-operator-777779d784-n8shl\" (UID: \"bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n8shl" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.271996 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27tsj\" (UniqueName: \"kubernetes.io/projected/f5c752b5-1413-4c4e-aea6-5302a7c69467-kube-api-access-27tsj\") pod \"ingress-canary-spx5f\" (UID: \"f5c752b5-1413-4c4e-aea6-5302a7c69467\") " pod="openshift-ingress-canary/ingress-canary-spx5f" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.272536 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce7f5025-ebd8-4cbf-af5a-460fbeb681d1-metrics-tls\") pod \"dns-default-w7sr4\" (UID: \"ce7f5025-ebd8-4cbf-af5a-460fbeb681d1\") " pod="openshift-dns/dns-default-w7sr4" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.304768 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs2tp\" (UniqueName: \"kubernetes.io/projected/290fd523-8c24-458a-8abb-e32ca43caae1-kube-api-access-qs2tp\") pod \"csi-hostpathplugin-64phz\" (UID: \"290fd523-8c24-458a-8abb-e32ca43caae1\") " pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.331703 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-959hq\" (UniqueName: \"kubernetes.io/projected/ae793283-426a-4e02-b96b-89c3f16f2d16-kube-api-access-959hq\") pod \"etcd-operator-b45778765-wvvpg\" (UID: \"ae793283-426a-4e02-b96b-89c3f16f2d16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.333307 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" event={"ID":"9c51c973-c370-41e8-b167-25d3b11418bf","Type":"ContainerStarted","Data":"6a43e179640e1b1f56bcefa6613ceeea5c74f898c5a73adf794aa37a8b6aca24"} Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.334504 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" event={"ID":"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93","Type":"ContainerStarted","Data":"caafeb67c7aefa20810226a8341333cb55d2bc2743763b95b6989d1d99cdf707"} Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.343623 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" event={"ID":"b37a2847-a94a-4a0c-b092-1ed7155a2d35","Type":"ContainerStarted","Data":"75da5a7bac808efea95403cb0f28f0e32a88fac01cca8335b0481d14b39d80ed"} Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.347254 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g949x\" (UniqueName: \"kubernetes.io/projected/bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c-kube-api-access-g949x\") pod \"service-ca-operator-777779d784-n8shl\" (UID: \"bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n8shl" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.347680 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.348162 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:53 crc kubenswrapper[4676]: E0124 00:05:53.348442 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:53.848430874 +0000 UTC m=+137.878401875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.348922 4676 generic.go:334] "Generic (PLEG): container finished" podID="2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e" containerID="9dbbda8f061aa866b6e59cc247c2b8e94fec1c21500f03dbd015022a26e987fe" exitCode=0 Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.348971 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2scbc" event={"ID":"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e","Type":"ContainerDied","Data":"9dbbda8f061aa866b6e59cc247c2b8e94fec1c21500f03dbd015022a26e987fe"} Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.348988 4676 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-apiserver/apiserver-76f77b778f-2scbc" event={"ID":"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e","Type":"ContainerStarted","Data":"3c4296fbfe84777bd64c3d2c2c56a32e0594f1f75477226e668665f31814a7fc"} Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.351679 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-x649d" event={"ID":"32de8698-4bd5-4154-92a3-76930504a72d","Type":"ContainerStarted","Data":"7d8f67156c95428502531e2d6d7b0d4ff2b03081bb36d385a3065839f459af50"} Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.354401 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts"] Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.356589 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2smk" event={"ID":"5cce043a-2f1b-4f48-967e-c48a00cfe1a6","Type":"ContainerStarted","Data":"8150da0c7bf43ea052c8b9cb2dfefad3eaa9d5a8d1321ad37559f3b798f3d407"} Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.357153 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.358968 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" event={"ID":"35efd97c-0521-428a-896d-b67490207db5","Type":"ContainerStarted","Data":"103da22d5d7ad80007377bf939401d399bf34ac726213392b0c3e7e78408e88d"} Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.361872 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" event={"ID":"b189330a-ee63-45f1-8104-4ef173f8ee22","Type":"ContainerStarted","Data":"afd5824e56e3451ab4222828d22b801d69c5d0db9f13ee565fd30c617f1539d4"} Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 
00:05:53.362176 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.363138 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" event={"ID":"e6351c23-e315-4c92-a467-380da403d3c4","Type":"ContainerStarted","Data":"c3711f3be897a53e7f5ed62f4dfea1182824530d785a6c28ea3f9882ad5f65da"} Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.366582 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" event={"ID":"f8911791-9db1-4463-997e-1ed50da17324","Type":"ContainerStarted","Data":"f3047891deacdaae65ba3e246560c96a6eba06db347aa1943104324a7818b958"} Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.368351 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" event={"ID":"d8d73604-754b-4dea-9be4-3451964e5589","Type":"ContainerStarted","Data":"ea4ba8929043adf0f17c34319734032d9b4653690dd2e7b618d89cf2bc0f0d30"} Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.368939 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb"] Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.382245 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpntf\" (UniqueName: \"kubernetes.io/projected/ce7f5025-ebd8-4cbf-af5a-460fbeb681d1-kube-api-access-xpntf\") pod \"dns-default-w7sr4\" (UID: \"ce7f5025-ebd8-4cbf-af5a-460fbeb681d1\") " pod="openshift-dns/dns-default-w7sr4" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.388504 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg" 
event={"ID":"c9350aae-3053-431e-a2d6-2137f990ca08","Type":"ContainerStarted","Data":"ffa14e1ba61aa831a4df4052a4b8a4a117f7159bef87e57ef471ae2cb78750ae"} Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.389515 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mkwj8" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.389922 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29486880-rhzd2" event={"ID":"00beace7-1e83-40ed-8d92-6da0cae7817f","Type":"ContainerStarted","Data":"79a71111cca31e12c0d06b9a6d103743df145b9a3918dbe29c65faddc57798a0"} Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.449516 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: E0124 00:05:53.455018 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:53.955000006 +0000 UTC m=+137.984971007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.494043 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.521548 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n8shl" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.533985 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-spx5f" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.548307 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-w7sr4" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.551619 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:53 crc kubenswrapper[4676]: E0124 00:05:53.552189 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 00:05:54.052172397 +0000 UTC m=+138.082143398 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.557911 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-64phz" Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.578363 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97"] Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.669160 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: E0124 00:05:53.669666 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:54.169647098 +0000 UTC m=+138.199618099 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.770030 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:53 crc kubenswrapper[4676]: E0124 00:05:53.770147 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:54.270123305 +0000 UTC m=+138.300094306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.770575 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: E0124 00:05:53.770893 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:54.27088195 +0000 UTC m=+138.300852951 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.792880 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp"] Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.876872 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:53 crc kubenswrapper[4676]: E0124 00:05:53.877349 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:54.377331268 +0000 UTC m=+138.407302269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.892526 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz"] Jan 24 00:05:53 crc kubenswrapper[4676]: I0124 00:05:53.980844 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:53 crc kubenswrapper[4676]: E0124 00:05:53.981336 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:54.481318677 +0000 UTC m=+138.511289678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.041204 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" podStartSLOduration=119.041189654 podStartE2EDuration="1m59.041189654s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:54.008913871 +0000 UTC m=+138.038884872" watchObservedRunningTime="2026-01-24 00:05:54.041189654 +0000 UTC m=+138.071160655" Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.084112 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:54 crc kubenswrapper[4676]: E0124 00:05:54.084741 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:54.584726038 +0000 UTC m=+138.614697039 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.186516 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:54 crc kubenswrapper[4676]: E0124 00:05:54.187105 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:54.687083995 +0000 UTC m=+138.717054996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:54 crc kubenswrapper[4676]: W0124 00:05:54.203256 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3651bf5e_f692_42b1_8d5e_512daae90cc8.slice/crio-4e0c8c40649393209c1a7f0a24f346edca73c2442af1cad768738ca55f1542e8 WatchSource:0}: Error finding container 4e0c8c40649393209c1a7f0a24f346edca73c2442af1cad768738ca55f1542e8: Status 404 returned error can't find the container with id 4e0c8c40649393209c1a7f0a24f346edca73c2442af1cad768738ca55f1542e8 Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.292544 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:54 crc kubenswrapper[4676]: E0124 00:05:54.292820 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:54.792799439 +0000 UTC m=+138.822770430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.399712 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:54 crc kubenswrapper[4676]: E0124 00:05:54.400328 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:54.900309102 +0000 UTC m=+138.930280103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.428455 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg"] Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.432113 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ck624"] Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.473738 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29486880-rhzd2" event={"ID":"00beace7-1e83-40ed-8d92-6da0cae7817f","Type":"ContainerStarted","Data":"e742b4dbf395eb8d16eb17141075162ea559f552eabe8302d3372acaca4a7557"} Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.499998 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" event={"ID":"f8911791-9db1-4463-997e-1ed50da17324","Type":"ContainerStarted","Data":"6d77a0eba3b135c30f1d01ca92806fe520c076b75330c973f7a9495533b64ef2"} Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.516471 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:54 crc kubenswrapper[4676]: E0124 00:05:54.519903 4676 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:55.01988567 +0000 UTC m=+139.049856671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.526815 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-x649d" event={"ID":"32de8698-4bd5-4154-92a3-76930504a72d","Type":"ContainerStarted","Data":"80c19b748f05fd317c9de6c496723038557369b959df5abdea7dec9534b0ced3"} Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.529673 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.531522 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" event={"ID":"3651bf5e-f692-42b1-8d5e-512daae90cc8","Type":"ContainerStarted","Data":"4e0c8c40649393209c1a7f0a24f346edca73c2442af1cad768738ca55f1542e8"} Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.555506 4676 generic.go:334] "Generic (PLEG): container finished" podID="35efd97c-0521-428a-896d-b67490207db5" containerID="cb84fc2eb2192156ab29e32f51c5a596bfedf3e62bd0fd8aad07a1f0e3905771" exitCode=0 Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.555845 4676 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" event={"ID":"35efd97c-0521-428a-896d-b67490207db5","Type":"ContainerDied","Data":"cb84fc2eb2192156ab29e32f51c5a596bfedf3e62bd0fd8aad07a1f0e3905771"} Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.563954 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lgg5l" event={"ID":"3e6f77cf-0bf1-4e28-a894-1ca5ee320c58","Type":"ContainerStarted","Data":"9188b32c88f888426672bc70a99ad5ed649d82e234b0affffed445951bc666a8"} Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.575907 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kvcv8"] Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.624236 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:54 crc kubenswrapper[4676]: E0124 00:05:54.624856 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:55.124841611 +0000 UTC m=+139.154812612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.629888 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" event={"ID":"e6351c23-e315-4c92-a467-380da403d3c4","Type":"ContainerStarted","Data":"b6d42d72931ba91bb49ebed4a06d7eef99b1342874b72defa107734f18d238c3"} Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.637638 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zq745" event={"ID":"7b8f105b-569d-47f2-b564-a0830b010e31","Type":"ContainerStarted","Data":"641343ee68c7d1d57fd397177fd2fd0c1022848fbb6f04704f067ece91ba653b"} Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.646952 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts" event={"ID":"a6360940-ea9d-456d-b546-5a20af404ee5","Type":"ContainerStarted","Data":"779034b47204771a9d445a3c321ee1593f3424dcefc08713ca39e4e19c631ec9"} Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.649980 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" event={"ID":"aca1e4a5-f702-4803-8f47-7fcb8c7326b6","Type":"ContainerStarted","Data":"f1d0928ddd80044d3a8b61c0eea7a74cedd17dae8f5c908b3c3e078463e661bb"} Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.650961 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp" event={"ID":"977bd2bf-e652-4b16-b8fc-902d4a1d7860","Type":"ContainerStarted","Data":"84f7d28879a217d190a75a520a829f44527ba66ca7e5a7b050285b27c0e379b6"} Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.662033 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" event={"ID":"9c51c973-c370-41e8-b167-25d3b11418bf","Type":"ContainerStarted","Data":"90417dabd733ff8898e20e70430eced484e20a837f5ba107a83119c85c8c4fc6"} Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.684338 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb" event={"ID":"596a60f1-3eb7-4a57-af4a-1cb1f37f2824","Type":"ContainerStarted","Data":"9467c263be7bed92eb44fc25114a903bc926ef051c31a985c99f3cfd00a7b7ad"} Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.725263 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:54 crc kubenswrapper[4676]: E0124 00:05:54.726395 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:55.226361261 +0000 UTC m=+139.256332262 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.822242 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29486880-rhzd2" podStartSLOduration=119.8222269 podStartE2EDuration="1m59.8222269s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:54.818856652 +0000 UTC m=+138.848827653" watchObservedRunningTime="2026-01-24 00:05:54.8222269 +0000 UTC m=+138.852197901" Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.829987 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:54 crc kubenswrapper[4676]: E0124 00:05:54.834836 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:55.334821944 +0000 UTC m=+139.364792945 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.866364 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-x649d" podStartSLOduration=119.866345092 podStartE2EDuration="1m59.866345092s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:54.864480073 +0000 UTC m=+138.894451074" watchObservedRunningTime="2026-01-24 00:05:54.866345092 +0000 UTC m=+138.896316093" Jan 24 00:05:54 crc kubenswrapper[4676]: W0124 00:05:54.884624 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8eed212_9137_45e5_8347_1f921fbedb19.slice/crio-7740d756e063c56b647f0d283026bb4d3d2312093aad751bb62487f9b3b5e9ab WatchSource:0}: Error finding container 7740d756e063c56b647f0d283026bb4d3d2312093aad751bb62487f9b3b5e9ab: Status 404 returned error can't find the container with id 7740d756e063c56b647f0d283026bb4d3d2312093aad751bb62487f9b3b5e9ab Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.898863 4676 patch_prober.go:28] interesting pod/console-operator-58897d9998-x649d container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 
00:05:54.898911 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-x649d" podUID="32de8698-4bd5-4154-92a3-76930504a72d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.931598 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:54 crc kubenswrapper[4676]: E0124 00:05:54.932124 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:55.432110799 +0000 UTC m=+139.462081800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.965589 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" podStartSLOduration=119.96557406 podStartE2EDuration="1m59.96557406s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:54.963275696 +0000 UTC m=+138.993246697" watchObservedRunningTime="2026-01-24 00:05:54.96557406 +0000 UTC m=+138.995545061" Jan 24 00:05:54 crc kubenswrapper[4676]: I0124 00:05:54.966202 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-jmv6d" podStartSLOduration=118.96619833 podStartE2EDuration="1m58.96619833s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:54.922367156 +0000 UTC m=+138.952338157" watchObservedRunningTime="2026-01-24 00:05:54.96619833 +0000 UTC m=+138.996169331" Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.009254 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkpbj" podStartSLOduration=120.009239638 podStartE2EDuration="2m0.009239638s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:55.008164534 +0000 UTC m=+139.038135535" watchObservedRunningTime="2026-01-24 00:05:55.009239638 +0000 UTC m=+139.039210629" Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.035073 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:55 crc kubenswrapper[4676]: E0124 00:05:55.035458 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:55.535445107 +0000 UTC m=+139.565416108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.137504 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:55 crc kubenswrapper[4676]: E0124 00:05:55.137900 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:55.637884976 +0000 UTC m=+139.667855977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.253136 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:55 crc kubenswrapper[4676]: E0124 00:05:55.253717 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:55.753706805 +0000 UTC m=+139.783677806 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.354217 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:55 crc kubenswrapper[4676]: E0124 00:05:55.354561 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:55.854538593 +0000 UTC m=+139.884509594 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.466605 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:55 crc kubenswrapper[4676]: E0124 00:05:55.466963 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:55.966946592 +0000 UTC m=+139.996917583 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.573867 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:55 crc kubenswrapper[4676]: E0124 00:05:55.574351 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:56.074333101 +0000 UTC m=+140.104304112 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.691863 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:55 crc kubenswrapper[4676]: E0124 00:05:55.692394 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:56.192352658 +0000 UTC m=+140.222323659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.734273 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ck624" event={"ID":"d8eed212-9137-45e5-8347-1f921fbedb19","Type":"ContainerStarted","Data":"7740d756e063c56b647f0d283026bb4d3d2312093aad751bb62487f9b3b5e9ab"} Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.778771 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rsw66"] Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.792805 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:55 crc kubenswrapper[4676]: E0124 00:05:55.793422 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:56.293405754 +0000 UTC m=+140.323376755 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.809831 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" event={"ID":"1eabf562-d289-4685-8ee5-ed1525930d19","Type":"ContainerStarted","Data":"c2eca52027c2d29df7bf8bb00c816d40e324cb64a398fb91764ca8dab5cbaa40"} Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.828206 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" event={"ID":"89435826-645d-48a2-aa3b-f5c42003dcbe","Type":"ContainerStarted","Data":"f8acee532eb12a14f5c8f001c0b4e4969213cd1ade4bba8504365d2ac03f9941"} Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.878582 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" event={"ID":"3651bf5e-f692-42b1-8d5e-512daae90cc8","Type":"ContainerStarted","Data":"162f024bbaf084d516b6f0091d3aba51b6321f7dbfa6359ef5f51639f06926ca"} Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.898833 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:55 crc kubenswrapper[4676]: E0124 00:05:55.899371 4676 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:56.399357236 +0000 UTC m=+140.429328237 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:55 crc kubenswrapper[4676]: W0124 00:05:55.900245 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7af03157_ee92_4e72_a775_acaeabb73e65.slice/crio-3e3f107eb805996a69d5ec1a562a4e16c0b2393227f3f9858f21b04b163f17b3 WatchSource:0}: Error finding container 3e3f107eb805996a69d5ec1a562a4e16c0b2393227f3f9858f21b04b163f17b3: Status 404 returned error can't find the container with id 3e3f107eb805996a69d5ec1a562a4e16c0b2393227f3f9858f21b04b163f17b3 Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.938512 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj"] Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.938555 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xt6kb"] Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.938566 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw"] Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.975619 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg" event={"ID":"c9350aae-3053-431e-a2d6-2137f990ca08","Type":"ContainerStarted","Data":"ec87b02f17531426f066db608cc81f86442cab14a504474156fb8369b08125a4"} Jan 24 00:05:55 crc kubenswrapper[4676]: I0124 00:05:55.999528 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:56 crc kubenswrapper[4676]: E0124 00:05:55.999876 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:56.499787522 +0000 UTC m=+140.529758523 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.000018 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:56 crc kubenswrapper[4676]: E0124 00:05:56.001529 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:56.501512797 +0000 UTC m=+140.531483798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.022503 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9h9cg" podStartSLOduration=120.022485158 podStartE2EDuration="2m0.022485158s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:56.022391715 +0000 UTC m=+140.052362716" watchObservedRunningTime="2026-01-24 00:05:56.022485158 +0000 UTC m=+140.052456159" Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.048805 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2smk" event={"ID":"5cce043a-2f1b-4f48-967e-c48a00cfe1a6","Type":"ContainerStarted","Data":"c3846591c5d0c38a4ca3a3f7394a46d77743449e795f56e0255610d9941b7250"} Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.106785 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:56 crc kubenswrapper[4676]: E0124 00:05:56.108018 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:56.608002757 +0000 UTC m=+140.637973758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.111110 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" event={"ID":"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93","Type":"ContainerStarted","Data":"f4280b5e7bc19b43a820be872f8d3506b0ceb5bfbe0f82d0caa7fb857177ccaa"} Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.112342 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.135843 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lgg5l" event={"ID":"3e6f77cf-0bf1-4e28-a894-1ca5ee320c58","Type":"ContainerStarted","Data":"c2145c8355a8ea9f0cc532946b4eee599349e33eafdfdf5d910f766bb5ccd739"} Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.162930 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-g2smk" podStartSLOduration=121.162912705 podStartE2EDuration="2m1.162912705s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:56.110943371 
+0000 UTC m=+140.140914362" watchObservedRunningTime="2026-01-24 00:05:56.162912705 +0000 UTC m=+140.192883706" Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.163960 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zq745" event={"ID":"7b8f105b-569d-47f2-b564-a0830b010e31","Type":"ContainerStarted","Data":"f0e0c89ffb6bc6404307026b50374c229b53942e5e31b8b6c5141753e3ea8157"} Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.188063 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.209000 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:56 crc kubenswrapper[4676]: E0124 00:05:56.213701 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:56.71368652 +0000 UTC m=+140.743657521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.242897 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" podStartSLOduration=120.242424651 podStartE2EDuration="2m0.242424651s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:56.167713789 +0000 UTC m=+140.197684800" watchObservedRunningTime="2026-01-24 00:05:56.242424651 +0000 UTC m=+140.272395652" Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.243507 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59"] Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.281198 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-zq745" podStartSLOduration=120.281179581 podStartE2EDuration="2m0.281179581s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:56.274752885 +0000 UTC m=+140.304723886" watchObservedRunningTime="2026-01-24 00:05:56.281179581 +0000 UTC m=+140.311150582" Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.299748 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/downloads-7954f5f757-6m9lm"] Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.303536 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mkwj8"] Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.315289 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:56 crc kubenswrapper[4676]: E0124 00:05:56.315635 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:56.815620974 +0000 UTC m=+140.845591975 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.350429 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6kx9j"] Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.350475 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn"] Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.355184 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl"] Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.359441 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-64phz"] Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.359505 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hjbqw"] Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.359792 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-lgg5l" podStartSLOduration=7.359778238 podStartE2EDuration="7.359778238s" podCreationTimestamp="2026-01-24 00:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:56.345747969 +0000 UTC m=+140.375718970" watchObservedRunningTime="2026-01-24 00:05:56.359778238 +0000 UTC 
m=+140.389749239" Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.433900 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cdrbc"] Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.434349 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:56 crc kubenswrapper[4676]: E0124 00:05:56.434843 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:56.93481125 +0000 UTC m=+140.964782251 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:56 crc kubenswrapper[4676]: W0124 00:05:56.460207 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b605a66_7904_4596_a67f_ea21ef41a24b.slice/crio-7ba26b93e116e90166b29605ba52eddeffaddd534a641c9088f052d3d5af4bb7 WatchSource:0}: Error finding container 7ba26b93e116e90166b29605ba52eddeffaddd534a641c9088f052d3d5af4bb7: Status 404 returned error can't find the container with id 7ba26b93e116e90166b29605ba52eddeffaddd534a641c9088f052d3d5af4bb7 Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.491749 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2"] Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.511620 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.535013 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:56 crc kubenswrapper[4676]: E0124 00:05:56.535552 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:57.035532585 +0000 UTC m=+141.065503586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.596611 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:05:56 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:05:56 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:05:56 crc kubenswrapper[4676]: healthz check failed Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.596665 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.640344 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:56 crc kubenswrapper[4676]: E0124 00:05:56.640773 4676 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:57.140752134 +0000 UTC m=+141.170723135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.694603 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wvvpg"] Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.704758 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-spx5f"] Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.720421 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-w7sr4"] Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.742633 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:56 crc kubenswrapper[4676]: E0124 00:05:56.743299 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 00:05:57.243278226 +0000 UTC m=+141.273249227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.744684 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:56 crc kubenswrapper[4676]: E0124 00:05:56.745034 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:57.245022802 +0000 UTC m=+141.274993803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.847189 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:56 crc kubenswrapper[4676]: E0124 00:05:56.848045 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:57.34801063 +0000 UTC m=+141.377981631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.861455 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj"] Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.864898 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n8shl"] Jan 24 00:05:56 crc kubenswrapper[4676]: I0124 00:05:56.951812 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:56 crc kubenswrapper[4676]: E0124 00:05:56.952483 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:57.452469453 +0000 UTC m=+141.482440464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.053151 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:57 crc kubenswrapper[4676]: E0124 00:05:57.053426 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:57.553371164 +0000 UTC m=+141.583342165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.053495 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:57 crc kubenswrapper[4676]: E0124 00:05:57.054015 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:57.553998354 +0000 UTC m=+141.583969355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.156813 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:57 crc kubenswrapper[4676]: E0124 00:05:57.157127 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:57.657112156 +0000 UTC m=+141.687083157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.191448 4676 patch_prober.go:28] interesting pod/console-operator-58897d9998-x649d container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.191661 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-x649d" podUID="32de8698-4bd5-4154-92a3-76930504a72d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.230596 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn" event={"ID":"c2d89ff2-f919-459e-8089-5097aab0f4e2","Type":"ContainerStarted","Data":"0f1131220020d523487c82d70998590e4f63179666319083be7e8c7b5cee3464"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.231428 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-64phz" event={"ID":"290fd523-8c24-458a-8abb-e32ca43caae1","Type":"ContainerStarted","Data":"3fafcb574bc33db2d01bed4dace2b1f846df351963679777f787fad266f7f8bf"} Jan 24 00:05:57 crc 
kubenswrapper[4676]: I0124 00:05:57.235599 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" podStartSLOduration=122.235587278 podStartE2EDuration="2m2.235587278s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:57.187424886 +0000 UTC m=+141.217395887" watchObservedRunningTime="2026-01-24 00:05:57.235587278 +0000 UTC m=+141.265558279" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.245876 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ck624" event={"ID":"d8eed212-9137-45e5-8347-1f921fbedb19","Type":"ContainerStarted","Data":"4501b68358fcadcb7cb93da658cb65fc5efa631286b740ec98d8303202c21d47"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.258714 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" event={"ID":"1eabf562-d289-4685-8ee5-ed1525930d19","Type":"ContainerStarted","Data":"572cf18d919a142df3a87fe72606c221700742f64f3b5868d79f863d9965c925"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.267390 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:57 crc kubenswrapper[4676]: E0124 00:05:57.268040 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-24 00:05:57.768023257 +0000 UTC m=+141.797994258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.280010 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6m9lm" event={"ID":"4b605a66-7904-4596-a67f-ea21ef41a24b","Type":"ContainerStarted","Data":"7ba26b93e116e90166b29605ba52eddeffaddd534a641c9088f052d3d5af4bb7"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.342088 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" event={"ID":"2f5e70d9-8b16-4684-bd98-4287ccbb6d85","Type":"ContainerStarted","Data":"b2f2d058d7135dc12e594df602d1825fd29a7c58382601f59aebaba322475de2"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.372869 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:57 crc kubenswrapper[4676]: E0124 00:05:57.373090 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 00:05:57.873036139 +0000 UTC m=+141.903007140 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.373343 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:57 crc kubenswrapper[4676]: E0124 00:05:57.382680 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:57.882637517 +0000 UTC m=+141.912608518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.477899 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cdrbc" event={"ID":"8a35921c-db91-45c2-a92c-a42f5b2cab84","Type":"ContainerStarted","Data":"ad68b7c4f8bc12a5320c1bd7f232e02f7a74e0d26a3956a4106d3113d7e10c28"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.478892 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:57 crc kubenswrapper[4676]: E0124 00:05:57.479295 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:57.979279381 +0000 UTC m=+142.009250382 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.506643 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w7sr4" event={"ID":"ce7f5025-ebd8-4cbf-af5a-460fbeb681d1","Type":"ContainerStarted","Data":"3132e83fa488365d9b966d87873edaee96ef6f7c13ad33522cfa1f661dff9e6d"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.515003 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:05:57 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:05:57 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:05:57 crc kubenswrapper[4676]: healthz check failed Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.515044 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.531962 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" podStartSLOduration=122.531943726 podStartE2EDuration="2m2.531943726s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:57.515251762 +0000 UTC m=+141.545222763" watchObservedRunningTime="2026-01-24 00:05:57.531943726 +0000 UTC m=+141.561914717" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.568502 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb" event={"ID":"596a60f1-3eb7-4a57-af4a-1cb1f37f2824","Type":"ContainerStarted","Data":"6891fc7226cd73cf2e1d9619ceeaabb04e9b8a541cc964a28a7cdd2d1bc78b99"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.568605 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb" event={"ID":"596a60f1-3eb7-4a57-af4a-1cb1f37f2824","Type":"ContainerStarted","Data":"b2f493a2e53381d0f797ca12d2731084d1a59ad8cb2c8880d1dee06227893110"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.568669 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.575564 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mkwj8" event={"ID":"f53afe6a-307c-4b0d-88cb-596703f35f8a","Type":"ContainerStarted","Data":"29d4f1cd87522474fa72d6fb43bcf59c68668dcfd449d06b68dacbfd7f8808ea"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.579862 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:57 crc kubenswrapper[4676]: E0124 
00:05:57.580363 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:58.080346566 +0000 UTC m=+142.110317567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.592481 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n8shl" event={"ID":"bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c","Type":"ContainerStarted","Data":"cccadc46c7ef5e3530762fa3243bf0c39b1ea45af73bc9c148e8124f6008387e"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.608110 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xt6kb" event={"ID":"6d52fce7-049e-441e-8e40-15d044e0319a","Type":"ContainerStarted","Data":"c1bd4c33800e8e92c124a2e705d048651d74ac4490e77bf5314817df6b90d826"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.609771 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj" event={"ID":"9d605c02-a15d-46a8-942c-cd85e6ce5452","Type":"ContainerStarted","Data":"bbad16bca0b94c1ecbe3635f858a4da358aad4f756468a9e54016dfc3ee83a53"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.609805 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj" event={"ID":"9d605c02-a15d-46a8-942c-cd85e6ce5452","Type":"ContainerStarted","Data":"a59bf3644ff0278dd14e9e4d77ad085e2b7e194b4685e749bf43ab39ae4b8436"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.614482 4676 generic.go:334] "Generic (PLEG): container finished" podID="b189330a-ee63-45f1-8104-4ef173f8ee22" containerID="41953255b89b39f1f003fa2f1dffd6a3957e363ae5d27c9fdb07f020bbead58e" exitCode=0 Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.614542 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" event={"ID":"b189330a-ee63-45f1-8104-4ef173f8ee22","Type":"ContainerDied","Data":"41953255b89b39f1f003fa2f1dffd6a3957e363ae5d27c9fdb07f020bbead58e"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.617857 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" event={"ID":"aca1e4a5-f702-4803-8f47-7fcb8c7326b6","Type":"ContainerStarted","Data":"6172309ae544f7e1556e78a727a310de7dc37c888a8ab8fcbd64f2b2e7c96e2c"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.618658 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.628252 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6kx9j" event={"ID":"4d53a745-f985-4659-b62c-ce297ce8ce85","Type":"ContainerStarted","Data":"94b32603a4403f69fddee08fd9a7547f2bbb94fb9b6010fd0ea33f0a8816842b"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.629745 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjbqw" 
event={"ID":"a19dcc7f-c3e9-4aa8-90dc-412550a8060f","Type":"ContainerStarted","Data":"fd121e832fcca06a108857f26f6893e5e3f2b814ea7cb9a7642bb3889987d555"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.631030 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ml6vn" event={"ID":"d8d73604-754b-4dea-9be4-3451964e5589","Type":"ContainerStarted","Data":"8618f4f528e7015470a790ec69dacfcd1f2c24b04abdd0eb3ba394b12d98b2e8"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.641805 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb" podStartSLOduration=121.641794514 podStartE2EDuration="2m1.641794514s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:57.639847092 +0000 UTC m=+141.669818093" watchObservedRunningTime="2026-01-24 00:05:57.641794514 +0000 UTC m=+141.671765515" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.686760 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:57 crc kubenswrapper[4676]: E0124 00:05:57.687935 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:58.187918591 +0000 UTC m=+142.217889592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.728682 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" event={"ID":"7af03157-ee92-4e72-a775-acaeabb73e65","Type":"ContainerStarted","Data":"a20606a37ebc8d77d9e63ac6ed5eaa05b5b45e9454c3cc98154b304c79f1827c"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.728729 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" event={"ID":"7af03157-ee92-4e72-a775-acaeabb73e65","Type":"ContainerStarted","Data":"3e3f107eb805996a69d5ec1a562a4e16c0b2393227f3f9858f21b04b163f17b3"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.729630 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.733199 4676 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rsw66 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.733236 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" podUID="7af03157-ee92-4e72-a775-acaeabb73e65" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.737562 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.749926 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" event={"ID":"25db14a1-725f-42ae-a6e9-646546b584c7","Type":"ContainerStarted","Data":"77972cd045074fba9945778f0832daba3c7a3289379524b5266dd7b755125f15"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.749970 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" event={"ID":"25db14a1-725f-42ae-a6e9-646546b584c7","Type":"ContainerStarted","Data":"d625dcfd007adab1bc03e3fcafd6290f06373efc93d7ae96c3360b773ce74b50"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.788777 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" event={"ID":"35efd97c-0521-428a-896d-b67490207db5","Type":"ContainerStarted","Data":"67c42cb9b12c5cf6127553a26716e03bb1dcb2af2b15485e570076ffaf8c3212"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.788912 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:57 crc kubenswrapper[4676]: E0124 00:05:57.789187 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:58.289173102 +0000 UTC m=+142.319144103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.790257 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.795216 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-6kx9j" podStartSLOduration=121.795206765 podStartE2EDuration="2m1.795206765s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:57.732113646 +0000 UTC m=+141.762084647" watchObservedRunningTime="2026-01-24 00:05:57.795206765 +0000 UTC m=+141.825177766" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.820202 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2scbc" event={"ID":"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e","Type":"ContainerStarted","Data":"faadd71331233346ce0e663f4ccb524437c06a15ad83d6f211042ba07fc44943"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.820245 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2scbc" 
event={"ID":"2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e","Type":"ContainerStarted","Data":"011dff1c3fe125abd38bd940ca07e57dcd72cc95d3be80cb73367df38ffc62bb"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.834207 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts" event={"ID":"a6360940-ea9d-456d-b546-5a20af404ee5","Type":"ContainerStarted","Data":"05f13459a2f3b055c50ed252a99d98ca75b4054ee677300bd00fe6e1ff08d44a"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.856803 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2xb97" podStartSLOduration=121.856784177 podStartE2EDuration="2m1.856784177s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:57.797130098 +0000 UTC m=+141.827101099" watchObservedRunningTime="2026-01-24 00:05:57.856784177 +0000 UTC m=+141.886755178" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.868674 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" podStartSLOduration=122.868651087 podStartE2EDuration="2m2.868651087s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:57.851819418 +0000 UTC m=+141.881790429" watchObservedRunningTime="2026-01-24 00:05:57.868651087 +0000 UTC m=+141.898622088" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.882228 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" podStartSLOduration=121.882209361 podStartE2EDuration="2m1.882209361s" 
podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:57.881141197 +0000 UTC m=+141.911112198" watchObservedRunningTime="2026-01-24 00:05:57.882209361 +0000 UTC m=+141.912180362" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.886619 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" event={"ID":"3651bf5e-f692-42b1-8d5e-512daae90cc8","Type":"ContainerStarted","Data":"77ede6dcf71f40244a89448f3c333530ae3535f093766fe2f7cada2a7b8f1805"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.892711 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:57 crc kubenswrapper[4676]: E0124 00:05:57.894037 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:58.394019679 +0000 UTC m=+142.423990680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.908961 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj" event={"ID":"e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106","Type":"ContainerStarted","Data":"7b39dfb3fd225ef1979265fdd2e45cfe4514080b41db9b075f6fa61cedd4f6e2"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.968334 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" event={"ID":"fdb046fc-eba9-4f07-a1d1-2db71a2d46c1","Type":"ContainerStarted","Data":"48552c7ac7205c236a824947a2c6d70c82f2be4c38380e4a2ff3a395cef9bd8b"} Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.969524 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.970475 4676 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r6r59 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.970521 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" podUID="fdb046fc-eba9-4f07-a1d1-2db71a2d46c1" containerName="packageserver" 
probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Jan 24 00:05:57 crc kubenswrapper[4676]: I0124 00:05:57.993706 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:57 crc kubenswrapper[4676]: E0124 00:05:57.994943 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:58.49491469 +0000 UTC m=+142.524885691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.016313 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" event={"ID":"ae793283-426a-4e02-b96b-89c3f16f2d16","Type":"ContainerStarted","Data":"278e06bc52d2c5dd807374de718b0b08159c45c4ffe1444c4ceca51bb82969ec"} Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.017772 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" 
event={"ID":"c50241f8-d135-4df8-b047-e76fb28b8a3d","Type":"ContainerStarted","Data":"9b58dea0bbca8fdb9cff011940784b3be0d4f2e9f630c86d3a6b54ff117e955b"} Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.018547 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.019706 4676 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-2cgk2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.019751 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" podUID="c50241f8-d135-4df8-b047-e76fb28b8a3d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.030771 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" event={"ID":"89435826-645d-48a2-aa3b-f5c42003dcbe","Type":"ContainerStarted","Data":"b436a79b5d8ef5e9fda6092960f14ec8555b8c360d001d483efb56a39a863e65"} Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.031749 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.058360 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp" 
event={"ID":"977bd2bf-e652-4b16-b8fc-902d4a1d7860","Type":"ContainerStarted","Data":"336fa4865cfc985bea40eacba6d369a0e38190209e4cc96e7fbeda326c923f35"} Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.102739 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zc8ts" podStartSLOduration=123.102720771 podStartE2EDuration="2m3.102720771s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:58.01147255 +0000 UTC m=+142.041443541" watchObservedRunningTime="2026-01-24 00:05:58.102720771 +0000 UTC m=+142.132691772" Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.105072 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:58 crc kubenswrapper[4676]: E0124 00:05:58.106200 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:58.606186802 +0000 UTC m=+142.636157803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.127147 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-spx5f" event={"ID":"f5c752b5-1413-4c4e-aea6-5302a7c69467","Type":"ContainerStarted","Data":"8b99ff8354c09cbd893a20e89d11dda139e4d7edf9f2b97679080aaa79aa03cc"} Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.208291 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:58 crc kubenswrapper[4676]: E0124 00:05:58.216990 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:58.71697565 +0000 UTC m=+142.746946651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.263155 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6wjpz" podStartSLOduration=122.263130087 podStartE2EDuration="2m2.263130087s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:58.10455325 +0000 UTC m=+142.134524251" watchObservedRunningTime="2026-01-24 00:05:58.263130087 +0000 UTC m=+142.293101088" Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.312626 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:58 crc kubenswrapper[4676]: E0124 00:05:58.313224 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:58.81320427 +0000 UTC m=+142.843175271 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.410585 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-2scbc" podStartSLOduration=123.410569607 podStartE2EDuration="2m3.410569607s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:58.26604921 +0000 UTC m=+142.296020211" watchObservedRunningTime="2026-01-24 00:05:58.410569607 +0000 UTC m=+142.440540608" Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.411209 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" podStartSLOduration=122.411205008 podStartE2EDuration="2m2.411205008s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:58.409894676 +0000 UTC m=+142.439865677" watchObservedRunningTime="2026-01-24 00:05:58.411205008 +0000 UTC m=+142.441176009" Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.414061 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: 
\"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:58 crc kubenswrapper[4676]: E0124 00:05:58.414428 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:58.914415731 +0000 UTC m=+142.944386732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.514630 4676 csr.go:261] certificate signing request csr-gztrj is approved, waiting to be issued Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.515629 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:58 crc kubenswrapper[4676]: E0124 00:05:58.516012 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:59.015997493 +0000 UTC m=+143.045968484 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.524697 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:05:58 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:05:58 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:05:58 crc kubenswrapper[4676]: healthz check failed Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.524760 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.547838 4676 csr.go:257] certificate signing request csr-gztrj is issued Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.556006 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" podStartSLOduration=122.555980864 podStartE2EDuration="2m2.555980864s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:58.471256131 +0000 UTC m=+142.501227132" watchObservedRunningTime="2026-01-24 00:05:58.555980864 +0000 UTC 
m=+142.585951865" Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.617648 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:58 crc kubenswrapper[4676]: E0124 00:05:58.617983 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:59.117971848 +0000 UTC m=+143.147942849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.652006 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4nnjp" podStartSLOduration=122.651992827 podStartE2EDuration="2m2.651992827s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:58.572977917 +0000 UTC m=+142.602948918" watchObservedRunningTime="2026-01-24 00:05:58.651992827 +0000 UTC m=+142.681963828" Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.725489 4676 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:58 crc kubenswrapper[4676]: E0124 00:05:58.725905 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:59.225889153 +0000 UTC m=+143.255860154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.827028 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:58 crc kubenswrapper[4676]: E0124 00:05:58.827792 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:59.327780195 +0000 UTC m=+143.357751196 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.852624 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" podStartSLOduration=123.85260418 podStartE2EDuration="2m3.85260418s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:58.656657967 +0000 UTC m=+142.686628968" watchObservedRunningTime="2026-01-24 00:05:58.85260418 +0000 UTC m=+142.882575181" Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.853191 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lxs47"] Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.854035 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.860636 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.872531 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lxs47"] Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.928981 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.929177 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592859d8-1f7e-4e35-acc9-635e130ad2d2-utilities\") pod \"certified-operators-lxs47\" (UID: \"592859d8-1f7e-4e35-acc9-635e130ad2d2\") " pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.929205 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjhrt\" (UniqueName: \"kubernetes.io/projected/592859d8-1f7e-4e35-acc9-635e130ad2d2-kube-api-access-tjhrt\") pod \"certified-operators-lxs47\" (UID: \"592859d8-1f7e-4e35-acc9-635e130ad2d2\") " pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:05:58 crc kubenswrapper[4676]: I0124 00:05:58.929229 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592859d8-1f7e-4e35-acc9-635e130ad2d2-catalog-content\") pod \"certified-operators-lxs47\" (UID: 
\"592859d8-1f7e-4e35-acc9-635e130ad2d2\") " pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:05:58 crc kubenswrapper[4676]: E0124 00:05:58.929394 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:59.429361437 +0000 UTC m=+143.459332438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.030621 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjhrt\" (UniqueName: \"kubernetes.io/projected/592859d8-1f7e-4e35-acc9-635e130ad2d2-kube-api-access-tjhrt\") pod \"certified-operators-lxs47\" (UID: \"592859d8-1f7e-4e35-acc9-635e130ad2d2\") " pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.030904 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592859d8-1f7e-4e35-acc9-635e130ad2d2-catalog-content\") pod \"certified-operators-lxs47\" (UID: \"592859d8-1f7e-4e35-acc9-635e130ad2d2\") " pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.030956 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.031023 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592859d8-1f7e-4e35-acc9-635e130ad2d2-utilities\") pod \"certified-operators-lxs47\" (UID: \"592859d8-1f7e-4e35-acc9-635e130ad2d2\") " pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:05:59 crc kubenswrapper[4676]: E0124 00:05:59.031365 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:59.531348703 +0000 UTC m=+143.561319704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.031396 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592859d8-1f7e-4e35-acc9-635e130ad2d2-utilities\") pod \"certified-operators-lxs47\" (UID: \"592859d8-1f7e-4e35-acc9-635e130ad2d2\") " pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.031608 4676 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-kvcv8 container/oauth-openshift 
namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.031653 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" podUID="89435826-645d-48a2-aa3b-f5c42003dcbe" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.031770 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592859d8-1f7e-4e35-acc9-635e130ad2d2-catalog-content\") pod \"certified-operators-lxs47\" (UID: \"592859d8-1f7e-4e35-acc9-635e130ad2d2\") " pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.062598 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rxmrn"] Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.063444 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.070324 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.084181 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxmrn"] Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.109152 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjhrt\" (UniqueName: \"kubernetes.io/projected/592859d8-1f7e-4e35-acc9-635e130ad2d2-kube-api-access-tjhrt\") pod \"certified-operators-lxs47\" (UID: \"592859d8-1f7e-4e35-acc9-635e130ad2d2\") " pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.131615 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.131809 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-utilities\") pod \"community-operators-rxmrn\" (UID: \"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd\") " pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.131840 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzqlz\" (UniqueName: \"kubernetes.io/projected/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-kube-api-access-hzqlz\") pod \"community-operators-rxmrn\" (UID: 
\"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd\") " pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.131873 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-catalog-content\") pod \"community-operators-rxmrn\" (UID: \"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd\") " pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:05:59 crc kubenswrapper[4676]: E0124 00:05:59.131997 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:59.631983975 +0000 UTC m=+143.661954976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.168742 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" event={"ID":"ae793283-426a-4e02-b96b-89c3f16f2d16","Type":"ContainerStarted","Data":"ee9da2b3e0f65e35ab572af64682e924c0f611a793e068bb8c18a0ee782bc896"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.173625 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.185235 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-wvvpg" podStartSLOduration=124.185222159 podStartE2EDuration="2m4.185222159s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:59.184125984 +0000 UTC m=+143.214096985" watchObservedRunningTime="2026-01-24 00:05:59.185222159 +0000 UTC m=+143.215193150" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.208077 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" event={"ID":"25db14a1-725f-42ae-a6e9-646546b584c7","Type":"ContainerStarted","Data":"f248b0348835be7d1147fcd695dd3066767299eda9ea4ffd9ff373a3f2b4259c"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.238650 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-utilities\") pod \"community-operators-rxmrn\" (UID: \"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd\") " pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.238695 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzqlz\" (UniqueName: \"kubernetes.io/projected/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-kube-api-access-hzqlz\") pod \"community-operators-rxmrn\" (UID: \"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd\") " pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.238731 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.238766 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-catalog-content\") pod \"community-operators-rxmrn\" (UID: \"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd\") " pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:05:59 crc kubenswrapper[4676]: E0124 00:05:59.240692 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:59.740678485 +0000 UTC m=+143.770649486 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.241199 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-utilities\") pod \"community-operators-rxmrn\" (UID: \"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd\") " pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.244445 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-catalog-content\") pod \"community-operators-rxmrn\" (UID: \"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd\") " pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.245060 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cdrbc" event={"ID":"8a35921c-db91-45c2-a92c-a42f5b2cab84","Type":"ContainerStarted","Data":"d18eed4d040a1801f33a1a546fb0c31a050f213bcf9c3be8243646f6c7b8f091"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.246386 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-spx5f" event={"ID":"f5c752b5-1413-4c4e-aea6-5302a7c69467","Type":"ContainerStarted","Data":"e943b53f8d0728d98dcdd6d0078374e761cb1c0dae7bf0d94025557ea8b3bad7"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.262407 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj" event={"ID":"e3f3c4fe-f6a3-4772-9551-3a0cf1bd5106","Type":"ContainerStarted","Data":"bf973003ab60349a19acd5d035e6d0d4442245f260794c5e37916a0c4c853df5"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.272701 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6kx9j" event={"ID":"4d53a745-f985-4659-b62c-ce297ce8ce85","Type":"ContainerStarted","Data":"ef308860d3684c8dcf0bcbf34ed1bc677d571ced579648c5ee47df26b4fe9bb3"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.285880 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wh9bw" podStartSLOduration=123.285859971 podStartE2EDuration="2m3.285859971s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:59.244298121 +0000 UTC m=+143.274269122" watchObservedRunningTime="2026-01-24 00:05:59.285859971 +0000 UTC m=+143.315830972" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.287562 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9lq6l"] Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.300139 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzqlz\" (UniqueName: \"kubernetes.io/projected/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-kube-api-access-hzqlz\") pod \"community-operators-rxmrn\" (UID: \"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd\") " pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.300511 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.301113 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9lq6l"] Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.312453 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjbqw" event={"ID":"a19dcc7f-c3e9-4aa8-90dc-412550a8060f","Type":"ContainerStarted","Data":"66ee604da142844888fae6dbb5d879f5a590502f3e8e6b2d15831781c77e7ece"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.312493 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjbqw" event={"ID":"a19dcc7f-c3e9-4aa8-90dc-412550a8060f","Type":"ContainerStarted","Data":"a90652cb8dbf7a8ff274bf2602ccbe67a8818f9961b942e19bab908916510b4f"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.322862 4676 generic.go:334] "Generic (PLEG): container finished" podID="1eabf562-d289-4685-8ee5-ed1525930d19" containerID="572cf18d919a142df3a87fe72606c221700742f64f3b5868d79f863d9965c925" exitCode=0 Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.323037 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" event={"ID":"1eabf562-d289-4685-8ee5-ed1525930d19","Type":"ContainerDied","Data":"572cf18d919a142df3a87fe72606c221700742f64f3b5868d79f863d9965c925"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.323646 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-spx5f" podStartSLOduration=10.32362349 podStartE2EDuration="10.32362349s" podCreationTimestamp="2026-01-24 00:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:59.298099613 
+0000 UTC m=+143.328070614" watchObservedRunningTime="2026-01-24 00:05:59.32362349 +0000 UTC m=+143.353594481" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.341441 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.341635 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdl5n\" (UniqueName: \"kubernetes.io/projected/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-kube-api-access-jdl5n\") pod \"certified-operators-9lq6l\" (UID: \"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59\") " pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.341663 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-catalog-content\") pod \"certified-operators-9lq6l\" (UID: \"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59\") " pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.341746 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-utilities\") pod \"certified-operators-9lq6l\" (UID: \"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59\") " pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:05:59 crc kubenswrapper[4676]: E0124 00:05:59.345007 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:05:59.844988685 +0000 UTC m=+143.874959686 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.385752 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" event={"ID":"fdb046fc-eba9-4f07-a1d1-2db71a2d46c1","Type":"ContainerStarted","Data":"6693a80f134e8c29e380a37d9d451ccb5bd9dbeb65aabe0df8490152f17f06ce"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.426120 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8ljj" podStartSLOduration=124.426101822 podStartE2EDuration="2m4.426101822s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:59.33268871 +0000 UTC m=+143.362659711" watchObservedRunningTime="2026-01-24 00:05:59.426101822 +0000 UTC m=+143.456072823" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.432010 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.445670 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdl5n\" (UniqueName: \"kubernetes.io/projected/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-kube-api-access-jdl5n\") pod \"certified-operators-9lq6l\" (UID: \"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59\") " pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.445731 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-catalog-content\") pod \"certified-operators-9lq6l\" (UID: \"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59\") " pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.445834 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.445928 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-utilities\") pod \"certified-operators-9lq6l\" (UID: \"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59\") " pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.454596 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-catalog-content\") pod 
\"certified-operators-9lq6l\" (UID: \"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59\") " pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:05:59 crc kubenswrapper[4676]: E0124 00:05:59.456146 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:05:59.956121943 +0000 UTC m=+143.986092944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.458089 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-utilities\") pod \"certified-operators-9lq6l\" (UID: \"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59\") " pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.489967 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdl5n\" (UniqueName: \"kubernetes.io/projected/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-kube-api-access-jdl5n\") pod \"certified-operators-9lq6l\" (UID: \"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59\") " pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.490081 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pnqjd"] Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.545013 4676 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.546571 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.546697 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/891b78f7-509c-4e8d-b846-52881396a64d-catalog-content\") pod \"community-operators-pnqjd\" (UID: \"891b78f7-509c-4e8d-b846-52881396a64d\") " pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.546759 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/891b78f7-509c-4e8d-b846-52881396a64d-utilities\") pod \"community-operators-pnqjd\" (UID: \"891b78f7-509c-4e8d-b846-52881396a64d\") " pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.546795 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49sw6\" (UniqueName: \"kubernetes.io/projected/891b78f7-509c-4e8d-b846-52881396a64d-kube-api-access-49sw6\") pod \"community-operators-pnqjd\" (UID: \"891b78f7-509c-4e8d-b846-52881396a64d\") " pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:05:59 crc kubenswrapper[4676]: E0124 00:05:59.546890 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:00.046877159 +0000 UTC m=+144.076848160 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.548959 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-24 00:00:58 +0000 UTC, rotation deadline is 2026-11-25 01:57:09.832515001 +0000 UTC Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.549012 4676 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7321h51m10.283506584s for next certificate rotation Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.555516 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hjbqw" podStartSLOduration=123.547176167 podStartE2EDuration="2m3.547176167s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:59.471291369 +0000 UTC m=+143.501262390" watchObservedRunningTime="2026-01-24 00:05:59.547176167 +0000 UTC m=+143.577147168" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.561801 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pnqjd"] Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.561852 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj" event={"ID":"9d605c02-a15d-46a8-942c-cd85e6ce5452","Type":"ContainerStarted","Data":"b02350aa3570b328b0bcc70e7d67adfee447748aa0c39b31cbf01d8a91ea118e"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.566793 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:05:59 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:05:59 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:05:59 crc kubenswrapper[4676]: healthz check failed Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.566846 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.585073 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" event={"ID":"b189330a-ee63-45f1-8104-4ef173f8ee22","Type":"ContainerStarted","Data":"96d356ad0380f380a69eff524b4f1dc9c2ba310f7ae2f53d7cb02846df923de8"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.601041 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6m9lm" event={"ID":"4b605a66-7904-4596-a67f-ea21ef41a24b","Type":"ContainerStarted","Data":"3edd92bd702125f593d90a1fbce6f0f738ec4a56e64db7b21757f92df15eb7cf"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.601927 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6m9lm" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.603038 4676 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-6m9lm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.603075 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6m9lm" podUID="4b605a66-7904-4596-a67f-ea21ef41a24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.604604 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w7sr4" event={"ID":"ce7f5025-ebd8-4cbf-af5a-460fbeb681d1","Type":"ContainerStarted","Data":"8bf3837f26c124fb3dea1cb5af5423060dde2aad405ee85078a315680369e231"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.609351 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn" event={"ID":"c2d89ff2-f919-459e-8089-5097aab0f4e2","Type":"ContainerStarted","Data":"58b1c728726962279da6279af62f496118daac9c9e348c61cab85e7d8d43f7c5"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.622966 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" event={"ID":"c50241f8-d135-4df8-b047-e76fb28b8a3d","Type":"ContainerStarted","Data":"e7ce28b323b8b8486c8ab8d191dde3eb75dc5db6d7b9860b5ba5c6573304aea7"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.625037 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nhmlj" podStartSLOduration=123.625022551 podStartE2EDuration="2m3.625022551s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:59.620069341 +0000 UTC m=+143.650040342" watchObservedRunningTime="2026-01-24 00:05:59.625022551 +0000 UTC m=+143.654993552" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.648034 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.648190 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/891b78f7-509c-4e8d-b846-52881396a64d-catalog-content\") pod \"community-operators-pnqjd\" (UID: \"891b78f7-509c-4e8d-b846-52881396a64d\") " pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.648290 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/891b78f7-509c-4e8d-b846-52881396a64d-utilities\") pod \"community-operators-pnqjd\" (UID: \"891b78f7-509c-4e8d-b846-52881396a64d\") " pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.648356 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49sw6\" (UniqueName: \"kubernetes.io/projected/891b78f7-509c-4e8d-b846-52881396a64d-kube-api-access-49sw6\") pod \"community-operators-pnqjd\" (UID: \"891b78f7-509c-4e8d-b846-52881396a64d\") " pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:05:59 crc kubenswrapper[4676]: E0124 00:05:59.649790 4676 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:00.149772143 +0000 UTC m=+144.179743144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.653875 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/891b78f7-509c-4e8d-b846-52881396a64d-catalog-content\") pod \"community-operators-pnqjd\" (UID: \"891b78f7-509c-4e8d-b846-52881396a64d\") " pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.655784 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/891b78f7-509c-4e8d-b846-52881396a64d-utilities\") pod \"community-operators-pnqjd\" (UID: \"891b78f7-509c-4e8d-b846-52881396a64d\") " pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.659023 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xt6kb" event={"ID":"6d52fce7-049e-441e-8e40-15d044e0319a","Type":"ContainerStarted","Data":"1d08010e79a9253ef911ab9df7ef942a3cfa181b9ed3fadcb6ae76c24d32cbaf"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.659060 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xt6kb" event={"ID":"6d52fce7-049e-441e-8e40-15d044e0319a","Type":"ContainerStarted","Data":"51af42e11a2cb3b8c0b0d79b6aa83a9b73beada8a6246f775c3f887692bd6a5a"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.673927 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ck624" event={"ID":"d8eed212-9137-45e5-8347-1f921fbedb19","Type":"ContainerStarted","Data":"b35684c3824b37433ff463a7c5ce25ec374136425ad48494c561190706b6df5d"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.688903 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" event={"ID":"2f5e70d9-8b16-4684-bd98-4287ccbb6d85","Type":"ContainerStarted","Data":"8460fe10f124ae6d86e08e1e7bb4e37f2b62c4cc301709f8e03e0a378432f4b2"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.705147 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2cgk2" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.719144 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.732244 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" podStartSLOduration=123.732205022 podStartE2EDuration="2m3.732205022s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:59.679113272 +0000 UTC m=+143.709084273" watchObservedRunningTime="2026-01-24 00:05:59.732205022 +0000 UTC m=+143.762176023" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.734769 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49sw6\" (UniqueName: \"kubernetes.io/projected/891b78f7-509c-4e8d-b846-52881396a64d-kube-api-access-49sw6\") pod \"community-operators-pnqjd\" (UID: \"891b78f7-509c-4e8d-b846-52881396a64d\") " pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.735499 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xt6kb" podStartSLOduration=124.735491087 podStartE2EDuration="2m4.735491087s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:59.731397696 +0000 UTC m=+143.761368697" watchObservedRunningTime="2026-01-24 00:05:59.735491087 +0000 UTC m=+143.765462088" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.743209 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mkwj8" 
event={"ID":"f53afe6a-307c-4b0d-88cb-596703f35f8a","Type":"ContainerStarted","Data":"a2a46bead832ce59b31551b91c40dbbcccf807b31102af079db87e5f92879d21"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.754978 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:59 crc kubenswrapper[4676]: E0124 00:05:59.755160 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:00.255142146 +0000 UTC m=+144.285113147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.755759 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:59 crc kubenswrapper[4676]: E0124 00:05:59.756067 4676 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:00.256055486 +0000 UTC m=+144.286026487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.775271 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-ck624" podStartSLOduration=123.77525322 podStartE2EDuration="2m3.77525322s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:59.774461485 +0000 UTC m=+143.804432486" watchObservedRunningTime="2026-01-24 00:05:59.77525322 +0000 UTC m=+143.805224211" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.789783 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n8shl" event={"ID":"bf0662c1-a2ab-4bc6-8278-5c6cbb43cf2c","Type":"ContainerStarted","Data":"15c98b0b70967786e363fc8732cd1aebce0f294589ac539df0ca5c027ee5ec22"} Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.791141 4676 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rsw66 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 
24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.791200 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" podUID="7af03157-ee92-4e72-a775-acaeabb73e65" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.828894 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sxjwn" podStartSLOduration=123.819556598 podStartE2EDuration="2m3.819556598s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:59.818097692 +0000 UTC m=+143.848068693" watchObservedRunningTime="2026-01-24 00:05:59.819556598 +0000 UTC m=+143.849527600" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.860994 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-6m9lm" podStartSLOduration=124.860967884 podStartE2EDuration="2m4.860967884s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:59.858823136 +0000 UTC m=+143.888794137" watchObservedRunningTime="2026-01-24 00:05:59.860967884 +0000 UTC m=+143.890938875" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.868987 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:59 
crc kubenswrapper[4676]: E0124 00:05:59.870733 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:00.370709817 +0000 UTC m=+144.400680818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.886829 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:05:59 crc kubenswrapper[4676]: E0124 00:05:59.897320 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:00.397280817 +0000 UTC m=+144.427251818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.937522 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mkwj8" podStartSLOduration=123.937477704 podStartE2EDuration="2m3.937477704s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:59.885656525 +0000 UTC m=+143.915627526" watchObservedRunningTime="2026-01-24 00:05:59.937477704 +0000 UTC m=+143.967448705" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.946755 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:05:59 crc kubenswrapper[4676]: I0124 00:05:59.990141 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:05:59 crc kubenswrapper[4676]: E0124 00:05:59.990732 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 00:06:00.490707908 +0000 UTC m=+144.520678909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.060272 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n8shl" podStartSLOduration=124.060244354 podStartE2EDuration="2m4.060244354s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:06:00.054127168 +0000 UTC m=+144.084098169" watchObservedRunningTime="2026-01-24 00:06:00.060244354 +0000 UTC m=+144.090215355" Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.061067 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-975pl" podStartSLOduration=124.061059841 podStartE2EDuration="2m4.061059841s" podCreationTimestamp="2026-01-24 00:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:05:59.977818486 +0000 UTC m=+144.007789487" watchObservedRunningTime="2026-01-24 00:06:00.061059841 +0000 UTC m=+144.091030842" Jan 24 00:06:00 crc kubenswrapper[4676]: E0124 00:06:00.092552 4676 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:00.592301321 +0000 UTC m=+144.622272322 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.091837 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.142058 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zllh" Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.194260 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:00 crc kubenswrapper[4676]: E0124 00:06:00.194810 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:00.694793352 +0000 UTC m=+144.724764353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.195115 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:00 crc kubenswrapper[4676]: E0124 00:06:00.195643 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:00.69563569 +0000 UTC m=+144.725606691 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.297978 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:00 crc kubenswrapper[4676]: E0124 00:06:00.298397 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:00.798367429 +0000 UTC m=+144.828338430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.392450 4676 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r6r59 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.392532 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" podUID="fdb046fc-eba9-4f07-a1d1-2db71a2d46c1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.402766 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:00 crc kubenswrapper[4676]: E0124 00:06:00.403345 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-24 00:06:00.90332847 +0000 UTC m=+144.933299471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.508330 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:00 crc kubenswrapper[4676]: E0124 00:06:00.509059 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:01.009043194 +0000 UTC m=+145.039014195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.515672 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:06:00 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:06:00 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:06:00 crc kubenswrapper[4676]: healthz check failed Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.515723 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.612092 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:00 crc kubenswrapper[4676]: E0124 00:06:00.612487 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-24 00:06:01.112475756 +0000 UTC m=+145.142446747 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.714860 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:00 crc kubenswrapper[4676]: E0124 00:06:00.715332 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:01.215315038 +0000 UTC m=+145.245286029 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.794503 4676 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-kvcv8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.794583 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" podUID="89435826-645d-48a2-aa3b-f5c42003dcbe" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.814741 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w7sr4" event={"ID":"ce7f5025-ebd8-4cbf-af5a-460fbeb681d1","Type":"ContainerStarted","Data":"6711c3613b3b509231fd33b0dc5c3e7bb7bcceb4c5432073d4be1891e4207a35"} Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.814829 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-w7sr4" Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.819085 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:00 crc kubenswrapper[4676]: E0124 00:06:00.819425 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:01.319413751 +0000 UTC m=+145.349384752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.822837 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-64phz" event={"ID":"290fd523-8c24-458a-8abb-e32ca43caae1","Type":"ContainerStarted","Data":"6bf62bc4a416ff0bd0b8789a0d98c2c4fda57ab1304842586447edb37f969335"} Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.834228 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cdrbc" event={"ID":"8a35921c-db91-45c2-a92c-a42f5b2cab84","Type":"ContainerStarted","Data":"96bf3915a948d562927938299e55bac7c30f343e96c87a15a7fa3a3f486b1edb"} Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.834617 4676 patch_prober.go:28] interesting pod/downloads-7954f5f757-6m9lm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 
10.217.0.18:8080: connect: connection refused" start-of-body= Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.834649 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6m9lm" podUID="4b605a66-7904-4596-a67f-ea21ef41a24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.839253 4676 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rsw66 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.839282 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" podUID="7af03157-ee92-4e72-a775-acaeabb73e65" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.924600 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:00 crc kubenswrapper[4676]: E0124 00:06:00.928040 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:01.428022128 +0000 UTC m=+145.457993129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.984171 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-w7sr4" podStartSLOduration=10.984152065 podStartE2EDuration="10.984152065s" podCreationTimestamp="2026-01-24 00:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:06:00.927605235 +0000 UTC m=+144.957576236" watchObservedRunningTime="2026-01-24 00:06:00.984152065 +0000 UTC m=+145.014123066" Jan 24 00:06:00 crc kubenswrapper[4676]: I0124 00:06:00.985128 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-cdrbc" podStartSLOduration=125.985122817 podStartE2EDuration="2m5.985122817s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:06:00.983757882 +0000 UTC m=+145.013728883" watchObservedRunningTime="2026-01-24 00:06:00.985122817 +0000 UTC m=+145.015093818" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.029432 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:01 crc kubenswrapper[4676]: E0124 00:06:01.037935 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:01.537913447 +0000 UTC m=+145.567884448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.051149 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lxs47"] Jan 24 00:06:01 crc kubenswrapper[4676]: W0124 00:06:01.069077 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod592859d8_1f7e_4e35_acc9_635e130ad2d2.slice/crio-13ab7acc7e8b9f7cc9283df9a7fe0c7c12a31d5e17ec678008c5409224450c7c WatchSource:0}: Error finding container 13ab7acc7e8b9f7cc9283df9a7fe0c7c12a31d5e17ec678008c5409224450c7c: Status 404 returned error can't find the container with id 13ab7acc7e8b9f7cc9283df9a7fe0c7c12a31d5e17ec678008c5409224450c7c Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.100876 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l7nsm"] Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.101967 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.109969 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.130787 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:01 crc kubenswrapper[4676]: E0124 00:06:01.131205 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:01.631186663 +0000 UTC m=+145.661157664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.163422 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7nsm"] Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.234061 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.234103 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/920c325a-f36b-4162-9d37-ea88124be938-utilities\") pod \"redhat-marketplace-l7nsm\" (UID: \"920c325a-f36b-4162-9d37-ea88124be938\") " pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.234150 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/920c325a-f36b-4162-9d37-ea88124be938-catalog-content\") pod \"redhat-marketplace-l7nsm\" (UID: \"920c325a-f36b-4162-9d37-ea88124be938\") " pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.234172 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4p5l\" (UniqueName: \"kubernetes.io/projected/920c325a-f36b-4162-9d37-ea88124be938-kube-api-access-n4p5l\") pod \"redhat-marketplace-l7nsm\" (UID: \"920c325a-f36b-4162-9d37-ea88124be938\") " pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:06:01 crc kubenswrapper[4676]: E0124 00:06:01.234462 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:01.73445153 +0000 UTC m=+145.764422531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.318774 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxmrn"] Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.335874 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.336010 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/920c325a-f36b-4162-9d37-ea88124be938-catalog-content\") pod 
\"redhat-marketplace-l7nsm\" (UID: \"920c325a-f36b-4162-9d37-ea88124be938\") " pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.336031 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4p5l\" (UniqueName: \"kubernetes.io/projected/920c325a-f36b-4162-9d37-ea88124be938-kube-api-access-n4p5l\") pod \"redhat-marketplace-l7nsm\" (UID: \"920c325a-f36b-4162-9d37-ea88124be938\") " pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.336116 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/920c325a-f36b-4162-9d37-ea88124be938-utilities\") pod \"redhat-marketplace-l7nsm\" (UID: \"920c325a-f36b-4162-9d37-ea88124be938\") " pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.336871 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/920c325a-f36b-4162-9d37-ea88124be938-utilities\") pod \"redhat-marketplace-l7nsm\" (UID: \"920c325a-f36b-4162-9d37-ea88124be938\") " pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:06:01 crc kubenswrapper[4676]: E0124 00:06:01.336932 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:01.8369187 +0000 UTC m=+145.866889701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.337121 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/920c325a-f36b-4162-9d37-ea88124be938-catalog-content\") pod \"redhat-marketplace-l7nsm\" (UID: \"920c325a-f36b-4162-9d37-ea88124be938\") " pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.448143 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:01 crc kubenswrapper[4676]: E0124 00:06:01.448718 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:01.948703729 +0000 UTC m=+145.978674730 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.449616 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4p5l\" (UniqueName: \"kubernetes.io/projected/920c325a-f36b-4162-9d37-ea88124be938-kube-api-access-n4p5l\") pod \"redhat-marketplace-l7nsm\" (UID: \"920c325a-f36b-4162-9d37-ea88124be938\") " pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.454676 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-94fls"] Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.462890 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.486661 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-94fls"] Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.505003 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.512763 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:06:01 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:06:01 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:06:01 crc kubenswrapper[4676]: healthz check failed Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.512805 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.550854 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.551116 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5djr\" (UniqueName: \"kubernetes.io/projected/236fa0ff-93a4-429d-9a2a-b1ae84167818-kube-api-access-g5djr\") pod \"redhat-marketplace-94fls\" (UID: \"236fa0ff-93a4-429d-9a2a-b1ae84167818\") " pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.551251 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/236fa0ff-93a4-429d-9a2a-b1ae84167818-utilities\") pod 
\"redhat-marketplace-94fls\" (UID: \"236fa0ff-93a4-429d-9a2a-b1ae84167818\") " pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:06:01 crc kubenswrapper[4676]: E0124 00:06:01.551370 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:02.051271813 +0000 UTC m=+146.081242804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.551587 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/236fa0ff-93a4-429d-9a2a-b1ae84167818-catalog-content\") pod \"redhat-marketplace-94fls\" (UID: \"236fa0ff-93a4-429d-9a2a-b1ae84167818\") " pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.551715 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:01 crc kubenswrapper[4676]: E0124 00:06:01.552254 4676 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:02.052245774 +0000 UTC m=+146.082216775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.654805 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.655221 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5djr\" (UniqueName: \"kubernetes.io/projected/236fa0ff-93a4-429d-9a2a-b1ae84167818-kube-api-access-g5djr\") pod \"redhat-marketplace-94fls\" (UID: \"236fa0ff-93a4-429d-9a2a-b1ae84167818\") " pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.655409 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/236fa0ff-93a4-429d-9a2a-b1ae84167818-utilities\") pod \"redhat-marketplace-94fls\" (UID: \"236fa0ff-93a4-429d-9a2a-b1ae84167818\") " pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.655564 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/236fa0ff-93a4-429d-9a2a-b1ae84167818-catalog-content\") pod \"redhat-marketplace-94fls\" (UID: \"236fa0ff-93a4-429d-9a2a-b1ae84167818\") " pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.656085 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/236fa0ff-93a4-429d-9a2a-b1ae84167818-catalog-content\") pod \"redhat-marketplace-94fls\" (UID: \"236fa0ff-93a4-429d-9a2a-b1ae84167818\") " pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:06:01 crc kubenswrapper[4676]: E0124 00:06:01.656250 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:02.156231853 +0000 UTC m=+146.186202854 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.657057 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/236fa0ff-93a4-429d-9a2a-b1ae84167818-utilities\") pod \"redhat-marketplace-94fls\" (UID: \"236fa0ff-93a4-429d-9a2a-b1ae84167818\") " pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.704742 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9lq6l"] Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.707232 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5djr\" (UniqueName: \"kubernetes.io/projected/236fa0ff-93a4-429d-9a2a-b1ae84167818-kube-api-access-g5djr\") pod \"redhat-marketplace-94fls\" (UID: \"236fa0ff-93a4-429d-9a2a-b1ae84167818\") " pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.756956 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:01 crc kubenswrapper[4676]: E0124 00:06:01.757289 4676 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:02.257277368 +0000 UTC m=+146.287248369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.779422 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pnqjd"] Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.786713 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.834669 4676 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-kvcv8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.834809 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" podUID="89435826-645d-48a2-aa3b-f5c42003dcbe" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.838216 
4676 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r6r59 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.838267 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" podUID="fdb046fc-eba9-4f07-a1d1-2db71a2d46c1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.858848 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:01 crc kubenswrapper[4676]: E0124 00:06:01.859027 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:02.359011476 +0000 UTC m=+146.388982477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.859210 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:01 crc kubenswrapper[4676]: E0124 00:06:01.859550 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:02.359530822 +0000 UTC m=+146.389501823 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.861058 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lq6l" event={"ID":"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59","Type":"ContainerStarted","Data":"240a15f7e6730a7c6ad0337131f8a5c437c590dd34ae29a62a731e625e9f6839"} Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.864169 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmrn" event={"ID":"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd","Type":"ContainerStarted","Data":"da65e02cce8de032757324d83329948b4cdaba38f23cea059f4dfacc20dbe060"} Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.864201 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmrn" event={"ID":"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd","Type":"ContainerStarted","Data":"e7b1f0797950f02a9f5633d0b1bb88fc987a1236bb934934be7f6364e391819f"} Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.868797 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.868827 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.900321 4676 patch_prober.go:28] interesting pod/apiserver-76f77b778f-2scbc container/openshift-apiserver namespace/openshift-apiserver: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 24 00:06:01 crc kubenswrapper[4676]: [+]log ok Jan 24 00:06:01 crc kubenswrapper[4676]: [+]etcd ok Jan 24 00:06:01 crc kubenswrapper[4676]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 24 00:06:01 crc kubenswrapper[4676]: [+]poststarthook/generic-apiserver-start-informers ok Jan 24 00:06:01 crc kubenswrapper[4676]: [+]poststarthook/max-in-flight-filter ok Jan 24 00:06:01 crc kubenswrapper[4676]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 24 00:06:01 crc kubenswrapper[4676]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 24 00:06:01 crc kubenswrapper[4676]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 24 00:06:01 crc kubenswrapper[4676]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 24 00:06:01 crc kubenswrapper[4676]: [+]poststarthook/project.openshift.io-projectcache ok Jan 24 00:06:01 crc kubenswrapper[4676]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 24 00:06:01 crc kubenswrapper[4676]: [+]poststarthook/openshift.io-startinformers ok Jan 24 00:06:01 crc kubenswrapper[4676]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 24 00:06:01 crc kubenswrapper[4676]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 24 00:06:01 crc kubenswrapper[4676]: livez check failed Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.900407 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-2scbc" podUID="2f4cf2ff-5d1d-4d40-b7ca-02d8d681e70e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.901716 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" 
event={"ID":"1eabf562-d289-4685-8ee5-ed1525930d19","Type":"ContainerDied","Data":"c2eca52027c2d29df7bf8bb00c816d40e324cb64a398fb91764ca8dab5cbaa40"} Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.901755 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2eca52027c2d29df7bf8bb00c816d40e324cb64a398fb91764ca8dab5cbaa40" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.901836 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.902171 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.939312 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxs47" event={"ID":"592859d8-1f7e-4e35-acc9-635e130ad2d2","Type":"ContainerStarted","Data":"5f55960a6c743513af98906e24b7fac1e853e9cbe06b95180d6ac6353a01cca4"} Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.939359 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxs47" event={"ID":"592859d8-1f7e-4e35-acc9-635e130ad2d2","Type":"ContainerStarted","Data":"13ab7acc7e8b9f7cc9283df9a7fe0c7c12a31d5e17ec678008c5409224450c7c"} Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.945255 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-64phz" event={"ID":"290fd523-8c24-458a-8abb-e32ca43caae1","Type":"ContainerStarted","Data":"2f417b8a42114b9e1c7520500e311fcb7c0e793e5961df975148a4209f2eb997"} Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.945855 4676 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.960241 4676 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eabf562-d289-4685-8ee5-ed1525930d19-secret-volume\") pod \"1eabf562-d289-4685-8ee5-ed1525930d19\" (UID: \"1eabf562-d289-4685-8ee5-ed1525930d19\") " Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.960468 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.960525 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eabf562-d289-4685-8ee5-ed1525930d19-config-volume\") pod \"1eabf562-d289-4685-8ee5-ed1525930d19\" (UID: \"1eabf562-d289-4685-8ee5-ed1525930d19\") " Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.960582 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvj8q\" (UniqueName: \"kubernetes.io/projected/1eabf562-d289-4685-8ee5-ed1525930d19-kube-api-access-gvj8q\") pod \"1eabf562-d289-4685-8ee5-ed1525930d19\" (UID: \"1eabf562-d289-4685-8ee5-ed1525930d19\") " Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.964139 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnqjd" event={"ID":"891b78f7-509c-4e8d-b846-52881396a64d","Type":"ContainerStarted","Data":"fe1008d35989eb8d8f31d34ea59eda3fc8a16f8472a8718f92a963e88a25a092"} Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.965529 4676 patch_prober.go:28] interesting pod/downloads-7954f5f757-6m9lm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 
10.217.0.18:8080: connect: connection refused" start-of-body= Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.965578 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6m9lm" podUID="4b605a66-7904-4596-a67f-ea21ef41a24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 24 00:06:01 crc kubenswrapper[4676]: E0124 00:06:01.966038 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:02.466016681 +0000 UTC m=+146.495987692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.969953 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eabf562-d289-4685-8ee5-ed1525930d19-config-volume" (OuterVolumeSpecName: "config-volume") pod "1eabf562-d289-4685-8ee5-ed1525930d19" (UID: "1eabf562-d289-4685-8ee5-ed1525930d19"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.970318 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-x649d" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.974723 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eabf562-d289-4685-8ee5-ed1525930d19-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1eabf562-d289-4685-8ee5-ed1525930d19" (UID: "1eabf562-d289-4685-8ee5-ed1525930d19"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:06:01 crc kubenswrapper[4676]: I0124 00:06:01.992837 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eabf562-d289-4685-8ee5-ed1525930d19-kube-api-access-gvj8q" (OuterVolumeSpecName: "kube-api-access-gvj8q") pod "1eabf562-d289-4685-8ee5-ed1525930d19" (UID: "1eabf562-d289-4685-8ee5-ed1525930d19"). InnerVolumeSpecName "kube-api-access-gvj8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.031139 4676 patch_prober.go:28] interesting pod/console-f9d7485db-g2smk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.031370 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g2smk" podUID="5cce043a-2f1b-4f48-967e-c48a00cfe1a6" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.031133 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.031681 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.065439 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.065876 4676 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eabf562-d289-4685-8ee5-ed1525930d19-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.066007 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvj8q\" (UniqueName: 
\"kubernetes.io/projected/1eabf562-d289-4685-8ee5-ed1525930d19-kube-api-access-gvj8q\") on node \"crc\" DevicePath \"\"" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.066116 4676 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eabf562-d289-4685-8ee5-ed1525930d19-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 00:06:02 crc kubenswrapper[4676]: E0124 00:06:02.067285 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:02.567264883 +0000 UTC m=+146.597235884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.094524 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.094743 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.115795 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-645jr"] Jan 24 00:06:02 crc kubenswrapper[4676]: E0124 00:06:02.116912 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eabf562-d289-4685-8ee5-ed1525930d19" containerName="collect-profiles" Jan 24 00:06:02 crc 
kubenswrapper[4676]: I0124 00:06:02.117102 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eabf562-d289-4685-8ee5-ed1525930d19" containerName="collect-profiles" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.117283 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eabf562-d289-4685-8ee5-ed1525930d19" containerName="collect-profiles" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.118160 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.118354 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.121636 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-645jr"] Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.122848 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.166962 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:02 crc kubenswrapper[4676]: E0124 00:06:02.167210 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:02.667190133 +0000 UTC m=+146.697161134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.270312 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10798203-b391-4a87-98a7-b41db2bbb0e2-catalog-content\") pod \"redhat-operators-645jr\" (UID: \"10798203-b391-4a87-98a7-b41db2bbb0e2\") " pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.270364 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g5kz\" (UniqueName: \"kubernetes.io/projected/10798203-b391-4a87-98a7-b41db2bbb0e2-kube-api-access-8g5kz\") pod \"redhat-operators-645jr\" (UID: \"10798203-b391-4a87-98a7-b41db2bbb0e2\") " pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.270424 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.270451 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10798203-b391-4a87-98a7-b41db2bbb0e2-utilities\") 
pod \"redhat-operators-645jr\" (UID: \"10798203-b391-4a87-98a7-b41db2bbb0e2\") " pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:06:02 crc kubenswrapper[4676]: E0124 00:06:02.278931 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:02.77890733 +0000 UTC m=+146.808878331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.415420 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.416151 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10798203-b391-4a87-98a7-b41db2bbb0e2-catalog-content\") pod \"redhat-operators-645jr\" (UID: \"10798203-b391-4a87-98a7-b41db2bbb0e2\") " pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.416193 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g5kz\" (UniqueName: 
\"kubernetes.io/projected/10798203-b391-4a87-98a7-b41db2bbb0e2-kube-api-access-8g5kz\") pod \"redhat-operators-645jr\" (UID: \"10798203-b391-4a87-98a7-b41db2bbb0e2\") " pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.416238 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10798203-b391-4a87-98a7-b41db2bbb0e2-utilities\") pod \"redhat-operators-645jr\" (UID: \"10798203-b391-4a87-98a7-b41db2bbb0e2\") " pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.416765 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10798203-b391-4a87-98a7-b41db2bbb0e2-utilities\") pod \"redhat-operators-645jr\" (UID: \"10798203-b391-4a87-98a7-b41db2bbb0e2\") " pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.416860 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10798203-b391-4a87-98a7-b41db2bbb0e2-catalog-content\") pod \"redhat-operators-645jr\" (UID: \"10798203-b391-4a87-98a7-b41db2bbb0e2\") " pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:06:02 crc kubenswrapper[4676]: E0124 00:06:02.416983 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:02.916961379 +0000 UTC m=+146.946932370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.427170 4676 patch_prober.go:28] interesting pod/downloads-7954f5f757-6m9lm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.427177 4676 patch_prober.go:28] interesting pod/downloads-7954f5f757-6m9lm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.427222 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6m9lm" podUID="4b605a66-7904-4596-a67f-ea21ef41a24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.427270 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6m9lm" podUID="4b605a66-7904-4596-a67f-ea21ef41a24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.466114 4676 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-sv8x4"] Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.480396 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.497895 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sv8x4"] Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.498915 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g5kz\" (UniqueName: \"kubernetes.io/projected/10798203-b391-4a87-98a7-b41db2bbb0e2-kube-api-access-8g5kz\") pod \"redhat-operators-645jr\" (UID: \"10798203-b391-4a87-98a7-b41db2bbb0e2\") " pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.503661 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.511762 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:06:02 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:06:02 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:06:02 crc kubenswrapper[4676]: healthz check failed Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.512056 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.520067 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.521280 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2247b246-d06f-4211-ab24-ff0ee05953b9-catalog-content\") pod \"redhat-operators-sv8x4\" (UID: \"2247b246-d06f-4211-ab24-ff0ee05953b9\") " pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.521497 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n6gg\" (UniqueName: \"kubernetes.io/projected/2247b246-d06f-4211-ab24-ff0ee05953b9-kube-api-access-9n6gg\") pod \"redhat-operators-sv8x4\" (UID: \"2247b246-d06f-4211-ab24-ff0ee05953b9\") " pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.521587 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2247b246-d06f-4211-ab24-ff0ee05953b9-utilities\") pod \"redhat-operators-sv8x4\" (UID: \"2247b246-d06f-4211-ab24-ff0ee05953b9\") " pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:06:02 crc kubenswrapper[4676]: E0124 00:06:02.533867 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:03.021961551 +0000 UTC m=+147.051932552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.627224 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.627561 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n6gg\" (UniqueName: \"kubernetes.io/projected/2247b246-d06f-4211-ab24-ff0ee05953b9-kube-api-access-9n6gg\") pod \"redhat-operators-sv8x4\" (UID: \"2247b246-d06f-4211-ab24-ff0ee05953b9\") " pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.627591 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2247b246-d06f-4211-ab24-ff0ee05953b9-utilities\") pod \"redhat-operators-sv8x4\" (UID: \"2247b246-d06f-4211-ab24-ff0ee05953b9\") " pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.627674 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2247b246-d06f-4211-ab24-ff0ee05953b9-catalog-content\") pod \"redhat-operators-sv8x4\" (UID: \"2247b246-d06f-4211-ab24-ff0ee05953b9\") " 
pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:06:02 crc kubenswrapper[4676]: E0124 00:06:02.628900 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:03.128879115 +0000 UTC m=+147.158850116 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.629922 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2247b246-d06f-4211-ab24-ff0ee05953b9-utilities\") pod \"redhat-operators-sv8x4\" (UID: \"2247b246-d06f-4211-ab24-ff0ee05953b9\") " pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.630048 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2247b246-d06f-4211-ab24-ff0ee05953b9-catalog-content\") pod \"redhat-operators-sv8x4\" (UID: \"2247b246-d06f-4211-ab24-ff0ee05953b9\") " pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.684077 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n6gg\" (UniqueName: \"kubernetes.io/projected/2247b246-d06f-4211-ab24-ff0ee05953b9-kube-api-access-9n6gg\") pod \"redhat-operators-sv8x4\" (UID: \"2247b246-d06f-4211-ab24-ff0ee05953b9\") " 
pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.684681 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.730949 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:02 crc kubenswrapper[4676]: E0124 00:06:02.732960 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:03.232942907 +0000 UTC m=+147.262913908 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.760950 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.833968 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:02 crc kubenswrapper[4676]: E0124 00:06:02.834387 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:03.334353563 +0000 UTC m=+147.364324564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.840961 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.926001 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7nsm"] Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.935202 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:02 crc kubenswrapper[4676]: E0124 00:06:02.935509 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:03.435496081 +0000 UTC m=+147.465467082 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.944527 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r6r59" Jan 24 00:06:02 crc kubenswrapper[4676]: I0124 00:06:02.951291 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:02.999282 4676 generic.go:334] "Generic (PLEG): container finished" podID="891b78f7-509c-4e8d-b846-52881396a64d" containerID="5f5e9794469ec06bd85502c50865410e6c1b4c331eb790f8073e4c59c7182bb0" exitCode=0 Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:02.999343 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnqjd" event={"ID":"891b78f7-509c-4e8d-b846-52881396a64d","Type":"ContainerDied","Data":"5f5e9794469ec06bd85502c50865410e6c1b4c331eb790f8073e4c59c7182bb0"} Jan 24 00:06:03 crc kubenswrapper[4676]: W0124 00:06:03.010367 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod920c325a_f36b_4162_9d37_ea88124be938.slice/crio-c30184174638423d11a5d0c24917e087282eea0233650b891df20e30923de797 WatchSource:0}: Error finding container c30184174638423d11a5d0c24917e087282eea0233650b891df20e30923de797: Status 404 returned error can't find the container with id c30184174638423d11a5d0c24917e087282eea0233650b891df20e30923de797 Jan 24 00:06:03 
crc kubenswrapper[4676]: I0124 00:06:03.029632 4676 generic.go:334] "Generic (PLEG): container finished" podID="53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" containerID="34fdffdd9a497f9a6cd5b90c76fdd621cc1104ee041982343a04412f81b7f452" exitCode=0 Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.029738 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lq6l" event={"ID":"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59","Type":"ContainerDied","Data":"34fdffdd9a497f9a6cd5b90c76fdd621cc1104ee041982343a04412f81b7f452"} Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.036126 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:03 crc kubenswrapper[4676]: E0124 00:06:03.037222 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:03.537206068 +0000 UTC m=+147.567177069 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.065555 4676 generic.go:334] "Generic (PLEG): container finished" podID="0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" containerID="da65e02cce8de032757324d83329948b4cdaba38f23cea059f4dfacc20dbe060" exitCode=0 Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.065627 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmrn" event={"ID":"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd","Type":"ContainerDied","Data":"da65e02cce8de032757324d83329948b4cdaba38f23cea059f4dfacc20dbe060"} Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.086515 4676 generic.go:334] "Generic (PLEG): container finished" podID="592859d8-1f7e-4e35-acc9-635e130ad2d2" containerID="5f55960a6c743513af98906e24b7fac1e853e9cbe06b95180d6ac6353a01cca4" exitCode=0 Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.086591 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxs47" event={"ID":"592859d8-1f7e-4e35-acc9-635e130ad2d2","Type":"ContainerDied","Data":"5f55960a6c743513af98906e24b7fac1e853e9cbe06b95180d6ac6353a01cca4"} Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.121125 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-64phz" event={"ID":"290fd523-8c24-458a-8abb-e32ca43caae1","Type":"ContainerStarted","Data":"5976a48db83ede27ce3462f28b7f1eb3e783f41c258f36eaff6c1057e7c7ed6e"} Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.138275 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:03 crc kubenswrapper[4676]: E0124 00:06:03.138560 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:03.638549493 +0000 UTC m=+147.668520484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.151484 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7d9xm" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.154667 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-94fls"] Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.240800 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:03 crc 
kubenswrapper[4676]: E0124 00:06:03.241649 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:03.741635373 +0000 UTC m=+147.771606374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.349441 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.349509 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.349535 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.349589 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.349621 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.350729 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 24 00:06:03 crc kubenswrapper[4676]: E0124 00:06:03.351603 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:03.851583154 +0000 UTC m=+147.881554155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.351994 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.355823 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.361607 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.367237 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.369139 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.373329 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.393419 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.415557 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.455412 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.456172 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e53167e0-92bc-428e-909f-791982828360-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e53167e0-92bc-428e-909f-791982828360\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.456245 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e53167e0-92bc-428e-909f-791982828360-kubelet-dir\") pod 
\"revision-pruner-8-crc\" (UID: \"e53167e0-92bc-428e-909f-791982828360\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 00:06:03 crc kubenswrapper[4676]: E0124 00:06:03.456362 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:03.956343307 +0000 UTC m=+147.986314308 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.473784 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.487101 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.541842 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:06:03 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:06:03 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:06:03 crc kubenswrapper[4676]: healthz check failed Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.541895 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.557109 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e53167e0-92bc-428e-909f-791982828360-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e53167e0-92bc-428e-909f-791982828360\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.557174 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.557197 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/e53167e0-92bc-428e-909f-791982828360-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e53167e0-92bc-428e-909f-791982828360\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.557265 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e53167e0-92bc-428e-909f-791982828360-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e53167e0-92bc-428e-909f-791982828360\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 00:06:03 crc kubenswrapper[4676]: E0124 00:06:03.568184 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:04.068169447 +0000 UTC m=+148.098140448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.586644 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.658176 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:03 crc kubenswrapper[4676]: E0124 00:06:03.658391 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:04.158342215 +0000 UTC m=+148.188313216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.658564 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:03 crc kubenswrapper[4676]: E0124 00:06:03.658930 4676 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:04.158922543 +0000 UTC m=+148.188893544 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.727551 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e53167e0-92bc-428e-909f-791982828360-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e53167e0-92bc-428e-909f-791982828360\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.759703 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:03 crc kubenswrapper[4676]: E0124 00:06:03.760125 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:04.260111313 +0000 UTC m=+148.290082314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.767338 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.868015 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:03 crc kubenswrapper[4676]: E0124 00:06:03.868357 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:04.368344798 +0000 UTC m=+148.398315789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.972721 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:03 crc kubenswrapper[4676]: E0124 00:06:03.976763 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:04.476728918 +0000 UTC m=+148.506699909 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:03 crc kubenswrapper[4676]: I0124 00:06:03.987459 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sv8x4"] Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.040068 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-645jr"] Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.079224 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:04 crc kubenswrapper[4676]: E0124 00:06:04.080031 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:04.580015415 +0000 UTC m=+148.609986416 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.128855 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8x4" event={"ID":"2247b246-d06f-4211-ab24-ff0ee05953b9","Type":"ContainerStarted","Data":"219fafaa737e407cadd362820da7774d28e18cc7cb1ebb7f3b2b92c72a3323f6"} Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.137107 4676 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.156650 4676 generic.go:334] "Generic (PLEG): container finished" podID="920c325a-f36b-4162-9d37-ea88124be938" containerID="1bdebd4e7e3f1f39e0a22ca3c50b2e8f9a237356bdbbca8ae01f28c88f7a72d3" exitCode=0 Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.156730 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7nsm" event={"ID":"920c325a-f36b-4162-9d37-ea88124be938","Type":"ContainerDied","Data":"1bdebd4e7e3f1f39e0a22ca3c50b2e8f9a237356bdbbca8ae01f28c88f7a72d3"} Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.156757 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7nsm" event={"ID":"920c325a-f36b-4162-9d37-ea88124be938","Type":"ContainerStarted","Data":"c30184174638423d11a5d0c24917e087282eea0233650b891df20e30923de797"} Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.173752 4676 generic.go:334] 
"Generic (PLEG): container finished" podID="236fa0ff-93a4-429d-9a2a-b1ae84167818" containerID="42507e3dd923f3d05f90ce7ba013d39b4e6323bf6515e461e6d463d4272af52b" exitCode=0 Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.173819 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94fls" event={"ID":"236fa0ff-93a4-429d-9a2a-b1ae84167818","Type":"ContainerDied","Data":"42507e3dd923f3d05f90ce7ba013d39b4e6323bf6515e461e6d463d4272af52b"} Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.173846 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94fls" event={"ID":"236fa0ff-93a4-429d-9a2a-b1ae84167818","Type":"ContainerStarted","Data":"0a7c742121571c7169e4b719618bc512b51f0e77919d08a6e5c83dac8827150c"} Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.182671 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.182794 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-64phz" event={"ID":"290fd523-8c24-458a-8abb-e32ca43caae1","Type":"ContainerStarted","Data":"fe80f86e3558468a8feac4e74dd88fd4b9b17729b357f9881bfebe8dfccbc57b"} Jan 24 00:06:04 crc kubenswrapper[4676]: E0124 00:06:04.183107 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:04.683089906 +0000 UTC m=+148.713060907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.187335 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-645jr" event={"ID":"10798203-b391-4a87-98a7-b41db2bbb0e2","Type":"ContainerStarted","Data":"bc7d200b5a89367114e48ce959705259b571653e291943569fe69329c0b533b4"} Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.220859 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-64phz" podStartSLOduration=14.220840364 podStartE2EDuration="14.220840364s" podCreationTimestamp="2026-01-24 00:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:06:04.219876983 +0000 UTC m=+148.249847984" watchObservedRunningTime="2026-01-24 00:06:04.220840364 +0000 UTC m=+148.250811365" Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.284045 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:04 crc kubenswrapper[4676]: E0124 00:06:04.285124 4676 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:04.785107791 +0000 UTC m=+148.815078792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.385911 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:04 crc kubenswrapper[4676]: E0124 00:06:04.386159 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:04.886133446 +0000 UTC m=+148.916104437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.386222 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:04 crc kubenswrapper[4676]: E0124 00:06:04.387818 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 00:06:04.887806369 +0000 UTC m=+148.917777370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-shf2w" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:04 crc kubenswrapper[4676]: W0124 00:06:04.392972 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-e3471f8e23142752704577de4b403d882ad5cc2e8b6c5bf1587b44ba83fe9f37 WatchSource:0}: Error finding container e3471f8e23142752704577de4b403d882ad5cc2e8b6c5bf1587b44ba83fe9f37: Status 404 returned error can't find the container with id e3471f8e23142752704577de4b403d882ad5cc2e8b6c5bf1587b44ba83fe9f37 Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.442492 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.488500 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:04 crc kubenswrapper[4676]: E0124 00:06:04.488885 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 00:06:04.988858685 +0000 UTC m=+149.018829686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.506537 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:06:04 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:06:04 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:06:04 crc kubenswrapper[4676]: healthz check failed Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.506592 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.556464 4676 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-24T00:06:04.137123834Z","Handler":null,"Name":""} Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.563783 4676 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.563819 4676 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: 
kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.589937 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.593009 4676 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.593099 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.647093 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-shf2w\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") " pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.690840 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.701317 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 24 00:06:04 crc kubenswrapper[4676]: I0124 00:06:04.885104 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.232611 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ef7d083c65ea8b6e45bcdfacca6ed93018a0a7b0ce7593b2d127709724c935d3"} Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.232959 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e3471f8e23142752704577de4b403d882ad5cc2e8b6c5bf1587b44ba83fe9f37"} Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.236335 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"17ce290f16c8b20f4eb8341921bdccd3ab48faa94bae57acbcc314e00d5645d0"} Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.236395 
4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"25b7490cc0d4c59c8fed18314427fc7bd29a53f0fe5fd026e9986cf01bd36a01"} Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.236998 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.242310 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"01e5a5c78cb190c15b5595d924d7ab5c035f776b3eda219cd3cc18aec5fe639b"} Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.242342 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"09b6c0db316c15ca6eabfa1fff99cc04fb02575f7e12598ba07ac53d6dc089a6"} Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.266858 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e53167e0-92bc-428e-909f-791982828360","Type":"ContainerStarted","Data":"484b979f0f3ab8054c80988065d10b132af0af2875a09fa5f2608669bbdb9600"} Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.302857 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-shf2w"] Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.328663 4676 generic.go:334] "Generic (PLEG): container finished" podID="10798203-b391-4a87-98a7-b41db2bbb0e2" containerID="eb96ef854f131482318ba1e9541dfd7cdefbb362e6e310e27326cdd1db4c3ea0" exitCode=0 Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.328819 4676 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-645jr" event={"ID":"10798203-b391-4a87-98a7-b41db2bbb0e2","Type":"ContainerDied","Data":"eb96ef854f131482318ba1e9541dfd7cdefbb362e6e310e27326cdd1db4c3ea0"} Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.376870 4676 generic.go:334] "Generic (PLEG): container finished" podID="2247b246-d06f-4211-ab24-ff0ee05953b9" containerID="07acdb0762573bb2b8235a6edcf0cfe4842b7bcb639741acfb88ccb64944ecbf" exitCode=0 Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.377136 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8x4" event={"ID":"2247b246-d06f-4211-ab24-ff0ee05953b9","Type":"ContainerDied","Data":"07acdb0762573bb2b8235a6edcf0cfe4842b7bcb639741acfb88ccb64944ecbf"} Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.427795 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.427777836 podStartE2EDuration="2.427777836s" podCreationTimestamp="2026-01-24 00:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:06:05.421774934 +0000 UTC m=+149.451745935" watchObservedRunningTime="2026-01-24 00:06:05.427777836 +0000 UTC m=+149.457748837" Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.511897 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:06:05 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:06:05 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:06:05 crc kubenswrapper[4676]: healthz check failed Jan 24 00:06:05 crc kubenswrapper[4676]: I0124 00:06:05.511946 4676 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.056976 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.058044 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.060401 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.060578 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.069709 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.135227 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7b42804-8738-47a8-8317-33db0f1c4c5b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d7b42804-8738-47a8-8317-33db0f1c4c5b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.135357 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7b42804-8738-47a8-8317-33db0f1c4c5b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d7b42804-8738-47a8-8317-33db0f1c4c5b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 
00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.238006 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7b42804-8738-47a8-8317-33db0f1c4c5b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d7b42804-8738-47a8-8317-33db0f1c4c5b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.238203 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7b42804-8738-47a8-8317-33db0f1c4c5b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d7b42804-8738-47a8-8317-33db0f1c4c5b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.239726 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7b42804-8738-47a8-8317-33db0f1c4c5b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d7b42804-8738-47a8-8317-33db0f1c4c5b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.284916 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7b42804-8738-47a8-8317-33db0f1c4c5b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d7b42804-8738-47a8-8317-33db0f1c4c5b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.294102 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.384202 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.445680 4676 generic.go:334] "Generic (PLEG): container finished" podID="e53167e0-92bc-428e-909f-791982828360" containerID="4207fe9c76ca9f8d6b76117b495313275408d130693fe5dcdc925692278cee18" exitCode=0 Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.445955 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e53167e0-92bc-428e-909f-791982828360","Type":"ContainerDied","Data":"4207fe9c76ca9f8d6b76117b495313275408d130693fe5dcdc925692278cee18"} Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.483996 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" event={"ID":"9887557b-81eb-4651-8da2-fd34d7b0be97","Type":"ContainerStarted","Data":"38b74903d521a57e98f715c4b02dad7881bee2134fc7d9ceec4368f1ee47f472"} Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.484035 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" event={"ID":"9887557b-81eb-4651-8da2-fd34d7b0be97","Type":"ContainerStarted","Data":"927b1ecf53efd6bf7705450ed6ba3d508cc6d33832a7f941121086cc971d4adb"} Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.484066 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.515163 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:06:06 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:06:06 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:06:06 crc 
kubenswrapper[4676]: healthz check failed Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.515236 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.542488 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" podStartSLOduration=131.542463675 podStartE2EDuration="2m11.542463675s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:06:06.542139484 +0000 UTC m=+150.572110485" watchObservedRunningTime="2026-01-24 00:06:06.542463675 +0000 UTC m=+150.572434676" Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.877344 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:06:06 crc kubenswrapper[4676]: I0124 00:06:06.889178 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-2scbc" Jan 24 00:06:07 crc kubenswrapper[4676]: I0124 00:06:07.210744 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 24 00:06:07 crc kubenswrapper[4676]: I0124 00:06:07.503612 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d7b42804-8738-47a8-8317-33db0f1c4c5b","Type":"ContainerStarted","Data":"5c407d9f5e5c43a714bcc05894b43ec0b73bbd457e184c3f628e1f23c2cae2eb"} Jan 24 00:06:07 crc kubenswrapper[4676]: I0124 00:06:07.510297 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:06:07 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:06:07 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:06:07 crc kubenswrapper[4676]: healthz check failed Jan 24 00:06:07 crc kubenswrapper[4676]: I0124 00:06:07.510577 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:06:07 crc kubenswrapper[4676]: I0124 00:06:07.941863 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 00:06:08 crc kubenswrapper[4676]: I0124 00:06:08.007948 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e53167e0-92bc-428e-909f-791982828360-kubelet-dir\") pod \"e53167e0-92bc-428e-909f-791982828360\" (UID: \"e53167e0-92bc-428e-909f-791982828360\") " Jan 24 00:06:08 crc kubenswrapper[4676]: I0124 00:06:08.008061 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e53167e0-92bc-428e-909f-791982828360-kube-api-access\") pod \"e53167e0-92bc-428e-909f-791982828360\" (UID: \"e53167e0-92bc-428e-909f-791982828360\") " Jan 24 00:06:08 crc kubenswrapper[4676]: I0124 00:06:08.009450 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e53167e0-92bc-428e-909f-791982828360-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e53167e0-92bc-428e-909f-791982828360" (UID: "e53167e0-92bc-428e-909f-791982828360"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:06:08 crc kubenswrapper[4676]: I0124 00:06:08.035620 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e53167e0-92bc-428e-909f-791982828360-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e53167e0-92bc-428e-909f-791982828360" (UID: "e53167e0-92bc-428e-909f-791982828360"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:06:08 crc kubenswrapper[4676]: I0124 00:06:08.110007 4676 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e53167e0-92bc-428e-909f-791982828360-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 00:06:08 crc kubenswrapper[4676]: I0124 00:06:08.110036 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e53167e0-92bc-428e-909f-791982828360-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 00:06:08 crc kubenswrapper[4676]: I0124 00:06:08.514775 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:06:08 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:06:08 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:06:08 crc kubenswrapper[4676]: healthz check failed Jan 24 00:06:08 crc kubenswrapper[4676]: I0124 00:06:08.514831 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:06:08 crc kubenswrapper[4676]: I0124 00:06:08.549251 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 00:06:08 crc kubenswrapper[4676]: I0124 00:06:08.549571 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e53167e0-92bc-428e-909f-791982828360","Type":"ContainerDied","Data":"484b979f0f3ab8054c80988065d10b132af0af2875a09fa5f2608669bbdb9600"} Jan 24 00:06:08 crc kubenswrapper[4676]: I0124 00:06:08.549588 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="484b979f0f3ab8054c80988065d10b132af0af2875a09fa5f2608669bbdb9600" Jan 24 00:06:08 crc kubenswrapper[4676]: I0124 00:06:08.553893 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-w7sr4" Jan 24 00:06:09 crc kubenswrapper[4676]: I0124 00:06:09.364801 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:06:09 crc kubenswrapper[4676]: I0124 00:06:09.365104 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:06:09 crc kubenswrapper[4676]: I0124 00:06:09.504493 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:06:09 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:06:09 crc kubenswrapper[4676]: [+]process-running ok Jan 24 
00:06:09 crc kubenswrapper[4676]: healthz check failed Jan 24 00:06:09 crc kubenswrapper[4676]: I0124 00:06:09.504547 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:06:09 crc kubenswrapper[4676]: I0124 00:06:09.601573 4676 generic.go:334] "Generic (PLEG): container finished" podID="d7b42804-8738-47a8-8317-33db0f1c4c5b" containerID="a583295ccb19455325e1ed08ce62db1ebcd517770407d19b13bdbe4a47114fca" exitCode=0 Jan 24 00:06:09 crc kubenswrapper[4676]: I0124 00:06:09.601621 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d7b42804-8738-47a8-8317-33db0f1c4c5b","Type":"ContainerDied","Data":"a583295ccb19455325e1ed08ce62db1ebcd517770407d19b13bdbe4a47114fca"} Jan 24 00:06:10 crc kubenswrapper[4676]: I0124 00:06:10.503928 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:06:10 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:06:10 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:06:10 crc kubenswrapper[4676]: healthz check failed Jan 24 00:06:10 crc kubenswrapper[4676]: I0124 00:06:10.504217 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:06:11 crc kubenswrapper[4676]: I0124 00:06:11.503139 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:06:11 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:06:11 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:06:11 crc kubenswrapper[4676]: healthz check failed Jan 24 00:06:11 crc kubenswrapper[4676]: I0124 00:06:11.503183 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:06:11 crc kubenswrapper[4676]: I0124 00:06:11.582556 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 00:06:11 crc kubenswrapper[4676]: I0124 00:06:11.662852 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d7b42804-8738-47a8-8317-33db0f1c4c5b","Type":"ContainerDied","Data":"5c407d9f5e5c43a714bcc05894b43ec0b73bbd457e184c3f628e1f23c2cae2eb"} Jan 24 00:06:11 crc kubenswrapper[4676]: I0124 00:06:11.662891 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c407d9f5e5c43a714bcc05894b43ec0b73bbd457e184c3f628e1f23c2cae2eb" Jan 24 00:06:11 crc kubenswrapper[4676]: I0124 00:06:11.663159 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 00:06:11 crc kubenswrapper[4676]: I0124 00:06:11.691451 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7b42804-8738-47a8-8317-33db0f1c4c5b-kube-api-access\") pod \"d7b42804-8738-47a8-8317-33db0f1c4c5b\" (UID: \"d7b42804-8738-47a8-8317-33db0f1c4c5b\") " Jan 24 00:06:11 crc kubenswrapper[4676]: I0124 00:06:11.691508 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7b42804-8738-47a8-8317-33db0f1c4c5b-kubelet-dir\") pod \"d7b42804-8738-47a8-8317-33db0f1c4c5b\" (UID: \"d7b42804-8738-47a8-8317-33db0f1c4c5b\") " Jan 24 00:06:11 crc kubenswrapper[4676]: I0124 00:06:11.691784 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7b42804-8738-47a8-8317-33db0f1c4c5b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d7b42804-8738-47a8-8317-33db0f1c4c5b" (UID: "d7b42804-8738-47a8-8317-33db0f1c4c5b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:06:11 crc kubenswrapper[4676]: I0124 00:06:11.714331 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b42804-8738-47a8-8317-33db0f1c4c5b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7b42804-8738-47a8-8317-33db0f1c4c5b" (UID: "d7b42804-8738-47a8-8317-33db0f1c4c5b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:06:11 crc kubenswrapper[4676]: I0124 00:06:11.792919 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7b42804-8738-47a8-8317-33db0f1c4c5b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 00:06:11 crc kubenswrapper[4676]: I0124 00:06:11.792953 4676 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7b42804-8738-47a8-8317-33db0f1c4c5b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 00:06:12 crc kubenswrapper[4676]: I0124 00:06:12.030091 4676 patch_prober.go:28] interesting pod/console-f9d7485db-g2smk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 24 00:06:12 crc kubenswrapper[4676]: I0124 00:06:12.030141 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g2smk" podUID="5cce043a-2f1b-4f48-967e-c48a00cfe1a6" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 24 00:06:12 crc kubenswrapper[4676]: I0124 00:06:12.445876 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-6m9lm" Jan 24 00:06:12 crc kubenswrapper[4676]: I0124 00:06:12.505316 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:06:12 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:06:12 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:06:12 crc kubenswrapper[4676]: healthz check failed Jan 24 00:06:12 crc 
kubenswrapper[4676]: I0124 00:06:12.505384 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:06:13 crc kubenswrapper[4676]: I0124 00:06:13.506051 4676 patch_prober.go:28] interesting pod/router-default-5444994796-zq745 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 00:06:13 crc kubenswrapper[4676]: [-]has-synced failed: reason withheld Jan 24 00:06:13 crc kubenswrapper[4676]: [+]process-running ok Jan 24 00:06:13 crc kubenswrapper[4676]: healthz check failed Jan 24 00:06:13 crc kubenswrapper[4676]: I0124 00:06:13.506179 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zq745" podUID="7b8f105b-569d-47f2-b564-a0830b010e31" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 00:06:14 crc kubenswrapper[4676]: I0124 00:06:14.505770 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:06:14 crc kubenswrapper[4676]: I0124 00:06:14.508521 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-zq745" Jan 24 00:06:18 crc kubenswrapper[4676]: I0124 00:06:18.514158 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs\") pod \"network-metrics-daemon-r4q22\" (UID: \"18335446-e572-4741-ad9e-e7aadee7550b\") " pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:06:18 crc kubenswrapper[4676]: I0124 00:06:18.533025 4676 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18335446-e572-4741-ad9e-e7aadee7550b-metrics-certs\") pod \"network-metrics-daemon-r4q22\" (UID: \"18335446-e572-4741-ad9e-e7aadee7550b\") " pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:06:18 crc kubenswrapper[4676]: I0124 00:06:18.779620 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r4q22" Jan 24 00:06:22 crc kubenswrapper[4676]: I0124 00:06:22.035360 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:06:22 crc kubenswrapper[4676]: I0124 00:06:22.042036 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:06:24 crc kubenswrapper[4676]: I0124 00:06:24.891873 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" Jan 24 00:06:27 crc kubenswrapper[4676]: I0124 00:06:27.890777 4676 generic.go:334] "Generic (PLEG): container finished" podID="00beace7-1e83-40ed-8d92-6da0cae7817f" containerID="e742b4dbf395eb8d16eb17141075162ea559f552eabe8302d3372acaca4a7557" exitCode=0 Jan 24 00:06:27 crc kubenswrapper[4676]: I0124 00:06:27.890918 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29486880-rhzd2" event={"ID":"00beace7-1e83-40ed-8d92-6da0cae7817f","Type":"ContainerDied","Data":"e742b4dbf395eb8d16eb17141075162ea559f552eabe8302d3372acaca4a7557"} Jan 24 00:06:32 crc kubenswrapper[4676]: I0124 00:06:32.312194 4676 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod1eabf562-d289-4685-8ee5-ed1525930d19"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod1eabf562-d289-4685-8ee5-ed1525930d19] : Timed out while waiting for systemd to remove 
kubepods-burstable-pod1eabf562_d289_4685_8ee5_ed1525930d19.slice" Jan 24 00:06:32 crc kubenswrapper[4676]: E0124 00:06:32.314078 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod1eabf562-d289-4685-8ee5-ed1525930d19] : unable to destroy cgroup paths for cgroup [kubepods burstable pod1eabf562-d289-4685-8ee5-ed1525930d19] : Timed out while waiting for systemd to remove kubepods-burstable-pod1eabf562_d289_4685_8ee5_ed1525930d19.slice" pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" podUID="1eabf562-d289-4685-8ee5-ed1525930d19" Jan 24 00:06:32 crc kubenswrapper[4676]: I0124 00:06:32.498549 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ljbnb" Jan 24 00:06:32 crc kubenswrapper[4676]: I0124 00:06:32.936937 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg" Jan 24 00:06:39 crc kubenswrapper[4676]: I0124 00:06:39.364229 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:06:39 crc kubenswrapper[4676]: I0124 00:06:39.364631 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:06:39 crc kubenswrapper[4676]: I0124 00:06:39.468616 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29486880-rhzd2" Jan 24 00:06:39 crc kubenswrapper[4676]: I0124 00:06:39.512498 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/00beace7-1e83-40ed-8d92-6da0cae7817f-serviceca\") pod \"00beace7-1e83-40ed-8d92-6da0cae7817f\" (UID: \"00beace7-1e83-40ed-8d92-6da0cae7817f\") " Jan 24 00:06:39 crc kubenswrapper[4676]: I0124 00:06:39.512601 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qp87\" (UniqueName: \"kubernetes.io/projected/00beace7-1e83-40ed-8d92-6da0cae7817f-kube-api-access-4qp87\") pod \"00beace7-1e83-40ed-8d92-6da0cae7817f\" (UID: \"00beace7-1e83-40ed-8d92-6da0cae7817f\") " Jan 24 00:06:39 crc kubenswrapper[4676]: I0124 00:06:39.513136 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00beace7-1e83-40ed-8d92-6da0cae7817f-serviceca" (OuterVolumeSpecName: "serviceca") pod "00beace7-1e83-40ed-8d92-6da0cae7817f" (UID: "00beace7-1e83-40ed-8d92-6da0cae7817f"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:06:39 crc kubenswrapper[4676]: I0124 00:06:39.524033 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00beace7-1e83-40ed-8d92-6da0cae7817f-kube-api-access-4qp87" (OuterVolumeSpecName: "kube-api-access-4qp87") pod "00beace7-1e83-40ed-8d92-6da0cae7817f" (UID: "00beace7-1e83-40ed-8d92-6da0cae7817f"). InnerVolumeSpecName "kube-api-access-4qp87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:06:39 crc kubenswrapper[4676]: I0124 00:06:39.614851 4676 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/00beace7-1e83-40ed-8d92-6da0cae7817f-serviceca\") on node \"crc\" DevicePath \"\"" Jan 24 00:06:39 crc kubenswrapper[4676]: I0124 00:06:39.614903 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qp87\" (UniqueName: \"kubernetes.io/projected/00beace7-1e83-40ed-8d92-6da0cae7817f-kube-api-access-4qp87\") on node \"crc\" DevicePath \"\"" Jan 24 00:06:39 crc kubenswrapper[4676]: I0124 00:06:39.979636 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29486880-rhzd2" event={"ID":"00beace7-1e83-40ed-8d92-6da0cae7817f","Type":"ContainerDied","Data":"79a71111cca31e12c0d06b9a6d103743df145b9a3918dbe29c65faddc57798a0"} Jan 24 00:06:39 crc kubenswrapper[4676]: I0124 00:06:39.979705 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79a71111cca31e12c0d06b9a6d103743df145b9a3918dbe29c65faddc57798a0" Jan 24 00:06:39 crc kubenswrapper[4676]: I0124 00:06:39.979772 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29486880-rhzd2" Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.736322 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 24 00:06:42 crc kubenswrapper[4676]: E0124 00:06:42.736922 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b42804-8738-47a8-8317-33db0f1c4c5b" containerName="pruner" Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.736934 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b42804-8738-47a8-8317-33db0f1c4c5b" containerName="pruner" Jan 24 00:06:42 crc kubenswrapper[4676]: E0124 00:06:42.736953 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e53167e0-92bc-428e-909f-791982828360" containerName="pruner" Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.736959 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="e53167e0-92bc-428e-909f-791982828360" containerName="pruner" Jan 24 00:06:42 crc kubenswrapper[4676]: E0124 00:06:42.736968 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00beace7-1e83-40ed-8d92-6da0cae7817f" containerName="image-pruner" Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.736974 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="00beace7-1e83-40ed-8d92-6da0cae7817f" containerName="image-pruner" Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.737063 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b42804-8738-47a8-8317-33db0f1c4c5b" containerName="pruner" Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.737073 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="00beace7-1e83-40ed-8d92-6da0cae7817f" containerName="image-pruner" Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.737084 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="e53167e0-92bc-428e-909f-791982828360" containerName="pruner" Jan 24 
00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.737497 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.742562 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.742791 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.754438 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b267c5b-c180-4b96-9223-65ed17bd8546-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0b267c5b-c180-4b96-9223-65ed17bd8546\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.754495 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b267c5b-c180-4b96-9223-65ed17bd8546-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0b267c5b-c180-4b96-9223-65ed17bd8546\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.755880 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.855650 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b267c5b-c180-4b96-9223-65ed17bd8546-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0b267c5b-c180-4b96-9223-65ed17bd8546\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.855742 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b267c5b-c180-4b96-9223-65ed17bd8546-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0b267c5b-c180-4b96-9223-65ed17bd8546\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.855845 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b267c5b-c180-4b96-9223-65ed17bd8546-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0b267c5b-c180-4b96-9223-65ed17bd8546\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 00:06:42 crc kubenswrapper[4676]: I0124 00:06:42.892907 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b267c5b-c180-4b96-9223-65ed17bd8546-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0b267c5b-c180-4b96-9223-65ed17bd8546\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 00:06:43 crc kubenswrapper[4676]: I0124 00:06:43.067516 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 00:06:43 crc kubenswrapper[4676]: I0124 00:06:43.491231 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 00:06:47 crc kubenswrapper[4676]: E0124 00:06:47.993834 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 24 00:06:47 crc kubenswrapper[4676]: E0124 00:06:47.995194 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g5kz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppA
rmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-645jr_openshift-marketplace(10798203-b391-4a87-98a7-b41db2bbb0e2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 00:06:47 crc kubenswrapper[4676]: E0124 00:06:47.997305 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-645jr" podUID="10798203-b391-4a87-98a7-b41db2bbb0e2" Jan 24 00:06:48 crc kubenswrapper[4676]: I0124 00:06:48.147479 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 24 00:06:48 crc kubenswrapper[4676]: I0124 00:06:48.149335 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 00:06:48 crc kubenswrapper[4676]: I0124 00:06:48.153920 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 24 00:06:48 crc kubenswrapper[4676]: I0124 00:06:48.250575 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c280bb7c-e926-46c9-bb2c-d88c64634673-var-lock\") pod \"installer-9-crc\" (UID: \"c280bb7c-e926-46c9-bb2c-d88c64634673\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 00:06:48 crc kubenswrapper[4676]: I0124 00:06:48.251172 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c280bb7c-e926-46c9-bb2c-d88c64634673-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c280bb7c-e926-46c9-bb2c-d88c64634673\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 00:06:48 crc kubenswrapper[4676]: I0124 00:06:48.251205 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c280bb7c-e926-46c9-bb2c-d88c64634673-kube-api-access\") pod \"installer-9-crc\" (UID: \"c280bb7c-e926-46c9-bb2c-d88c64634673\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 00:06:48 crc kubenswrapper[4676]: I0124 00:06:48.352706 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c280bb7c-e926-46c9-bb2c-d88c64634673-var-lock\") pod \"installer-9-crc\" (UID: \"c280bb7c-e926-46c9-bb2c-d88c64634673\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 00:06:48 crc kubenswrapper[4676]: I0124 00:06:48.352783 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/c280bb7c-e926-46c9-bb2c-d88c64634673-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c280bb7c-e926-46c9-bb2c-d88c64634673\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 00:06:48 crc kubenswrapper[4676]: I0124 00:06:48.352809 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c280bb7c-e926-46c9-bb2c-d88c64634673-kube-api-access\") pod \"installer-9-crc\" (UID: \"c280bb7c-e926-46c9-bb2c-d88c64634673\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 00:06:48 crc kubenswrapper[4676]: I0124 00:06:48.353319 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c280bb7c-e926-46c9-bb2c-d88c64634673-var-lock\") pod \"installer-9-crc\" (UID: \"c280bb7c-e926-46c9-bb2c-d88c64634673\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 00:06:48 crc kubenswrapper[4676]: I0124 00:06:48.353370 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c280bb7c-e926-46c9-bb2c-d88c64634673-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c280bb7c-e926-46c9-bb2c-d88c64634673\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 00:06:48 crc kubenswrapper[4676]: I0124 00:06:48.385831 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c280bb7c-e926-46c9-bb2c-d88c64634673-kube-api-access\") pod \"installer-9-crc\" (UID: \"c280bb7c-e926-46c9-bb2c-d88c64634673\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 00:06:48 crc kubenswrapper[4676]: I0124 00:06:48.496655 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 00:06:49 crc kubenswrapper[4676]: E0124 00:06:49.738940 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-645jr" podUID="10798203-b391-4a87-98a7-b41db2bbb0e2" Jan 24 00:06:49 crc kubenswrapper[4676]: E0124 00:06:49.800422 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 24 00:06:49 crc kubenswrapper[4676]: E0124 00:06:49.800592 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4p5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-l7nsm_openshift-marketplace(920c325a-f36b-4162-9d37-ea88124be938): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 00:06:49 crc kubenswrapper[4676]: E0124 00:06:49.801858 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-l7nsm" podUID="920c325a-f36b-4162-9d37-ea88124be938" Jan 24 00:06:51 crc 
kubenswrapper[4676]: I0124 00:06:51.966804 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kvcv8"] Jan 24 00:06:52 crc kubenswrapper[4676]: E0124 00:06:52.086408 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-l7nsm" podUID="920c325a-f36b-4162-9d37-ea88124be938" Jan 24 00:06:52 crc kubenswrapper[4676]: E0124 00:06:52.400432 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 24 00:06:52 crc kubenswrapper[4676]: E0124 00:06:52.400570 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdl5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-9lq6l_openshift-marketplace(53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 00:06:52 crc kubenswrapper[4676]: E0124 00:06:52.401694 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9lq6l" podUID="53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" Jan 24 00:06:53 crc 
kubenswrapper[4676]: E0124 00:06:53.879782 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 24 00:06:53 crc kubenswrapper[4676]: E0124 00:06:53.879925 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjhrt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-lxs47_openshift-marketplace(592859d8-1f7e-4e35-acc9-635e130ad2d2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 00:06:53 crc kubenswrapper[4676]: E0124 00:06:53.881351 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-lxs47" podUID="592859d8-1f7e-4e35-acc9-635e130ad2d2" Jan 24 00:06:54 crc kubenswrapper[4676]: E0124 00:06:54.260013 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lxs47" podUID="592859d8-1f7e-4e35-acc9-635e130ad2d2" Jan 24 00:06:54 crc kubenswrapper[4676]: E0124 00:06:54.260013 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-9lq6l" podUID="53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" Jan 24 00:06:54 crc kubenswrapper[4676]: I0124 00:06:54.607591 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 24 00:06:54 crc kubenswrapper[4676]: I0124 00:06:54.702200 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 24 00:06:54 crc kubenswrapper[4676]: E0124 00:06:54.726642 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 24 00:06:54 crc kubenswrapper[4676]: E0124 00:06:54.726886 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9n6gg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sv8x4_openshift-marketplace(2247b246-d06f-4211-ab24-ff0ee05953b9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 00:06:54 crc 
kubenswrapper[4676]: E0124 00:06:54.729191 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-sv8x4" podUID="2247b246-d06f-4211-ab24-ff0ee05953b9" Jan 24 00:06:54 crc kubenswrapper[4676]: I0124 00:06:54.762060 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-r4q22"] Jan 24 00:06:54 crc kubenswrapper[4676]: W0124 00:06:54.770188 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18335446_e572_4741_ad9e_e7aadee7550b.slice/crio-52d8453819d216d69f828d5a62928795685c48cac8a247f0f225c6413be7140d WatchSource:0}: Error finding container 52d8453819d216d69f828d5a62928795685c48cac8a247f0f225c6413be7140d: Status 404 returned error can't find the container with id 52d8453819d216d69f828d5a62928795685c48cac8a247f0f225c6413be7140d Jan 24 00:06:55 crc kubenswrapper[4676]: I0124 00:06:55.056753 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0b267c5b-c180-4b96-9223-65ed17bd8546","Type":"ContainerStarted","Data":"988abc66366e3cc44883733bf2fcb7439323f03e26b375f0822d70001d98da5a"} Jan 24 00:06:55 crc kubenswrapper[4676]: I0124 00:06:55.057769 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c280bb7c-e926-46c9-bb2c-d88c64634673","Type":"ContainerStarted","Data":"4ecf5bf658471ce1fe90cd7189121058d1fec23a97202699de0ab189c854a988"} Jan 24 00:06:55 crc kubenswrapper[4676]: I0124 00:06:55.059132 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r4q22" 
event={"ID":"18335446-e572-4741-ad9e-e7aadee7550b","Type":"ContainerStarted","Data":"52d8453819d216d69f828d5a62928795685c48cac8a247f0f225c6413be7140d"} Jan 24 00:06:55 crc kubenswrapper[4676]: E0124 00:06:55.060921 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sv8x4" podUID="2247b246-d06f-4211-ab24-ff0ee05953b9" Jan 24 00:06:56 crc kubenswrapper[4676]: I0124 00:06:56.064302 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0b267c5b-c180-4b96-9223-65ed17bd8546","Type":"ContainerStarted","Data":"5108dd50bdc2072552bf66414bb7978d357b797d9814d722f14cc8bcc9f79471"} Jan 24 00:06:56 crc kubenswrapper[4676]: E0124 00:06:56.965498 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 24 00:06:56 crc kubenswrapper[4676]: E0124 00:06:56.965803 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzqlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rxmrn_openshift-marketplace(0a4d7c63-cff0-4408-9cb6-450f3ebc53dd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 00:06:56 crc kubenswrapper[4676]: E0124 00:06:56.967154 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rxmrn" podUID="0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" Jan 24 00:06:57 crc 
kubenswrapper[4676]: I0124 00:06:57.070588 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r4q22" event={"ID":"18335446-e572-4741-ad9e-e7aadee7550b","Type":"ContainerStarted","Data":"67d0b1beee51807128af4787316e6197966ad653b2c5fbe81128eda05bb4a02c"} Jan 24 00:06:57 crc kubenswrapper[4676]: I0124 00:06:57.073045 4676 generic.go:334] "Generic (PLEG): container finished" podID="0b267c5b-c180-4b96-9223-65ed17bd8546" containerID="5108dd50bdc2072552bf66414bb7978d357b797d9814d722f14cc8bcc9f79471" exitCode=0 Jan 24 00:06:57 crc kubenswrapper[4676]: I0124 00:06:57.073106 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0b267c5b-c180-4b96-9223-65ed17bd8546","Type":"ContainerDied","Data":"5108dd50bdc2072552bf66414bb7978d357b797d9814d722f14cc8bcc9f79471"} Jan 24 00:06:57 crc kubenswrapper[4676]: I0124 00:06:57.077992 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c280bb7c-e926-46c9-bb2c-d88c64634673","Type":"ContainerStarted","Data":"f6bdacd4b420bb177446b935f1e3846bc08e4a95c458b2ca6f67dc97f5745d33"} Jan 24 00:06:57 crc kubenswrapper[4676]: E0124 00:06:57.080018 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rxmrn" podUID="0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" Jan 24 00:06:57 crc kubenswrapper[4676]: I0124 00:06:57.142448 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=9.142419904 podStartE2EDuration="9.142419904s" podCreationTimestamp="2026-01-24 00:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-24 00:06:57.127301425 +0000 UTC m=+201.157272436" watchObservedRunningTime="2026-01-24 00:06:57.142419904 +0000 UTC m=+201.172390895" Jan 24 00:06:58 crc kubenswrapper[4676]: I0124 00:06:58.086542 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r4q22" event={"ID":"18335446-e572-4741-ad9e-e7aadee7550b","Type":"ContainerStarted","Data":"c401c2b5d5d45061d4021b54cdd393ea02dc63cf8afa174d9ad88d789864ac4d"} Jan 24 00:06:58 crc kubenswrapper[4676]: I0124 00:06:58.108892 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-r4q22" podStartSLOduration=183.108874575 podStartE2EDuration="3m3.108874575s" podCreationTimestamp="2026-01-24 00:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:06:58.104445267 +0000 UTC m=+202.134416278" watchObservedRunningTime="2026-01-24 00:06:58.108874575 +0000 UTC m=+202.138845576" Jan 24 00:06:58 crc kubenswrapper[4676]: E0124 00:06:58.203704 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 24 00:06:58 crc kubenswrapper[4676]: E0124 00:06:58.203911 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5djr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-94fls_openshift-marketplace(236fa0ff-93a4-429d-9a2a-b1ae84167818): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 00:06:58 crc kubenswrapper[4676]: E0124 00:06:58.205124 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-94fls" podUID="236fa0ff-93a4-429d-9a2a-b1ae84167818" Jan 24 00:06:58 crc 
kubenswrapper[4676]: I0124 00:06:58.366710 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 00:06:58 crc kubenswrapper[4676]: I0124 00:06:58.490531 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b267c5b-c180-4b96-9223-65ed17bd8546-kube-api-access\") pod \"0b267c5b-c180-4b96-9223-65ed17bd8546\" (UID: \"0b267c5b-c180-4b96-9223-65ed17bd8546\") " Jan 24 00:06:58 crc kubenswrapper[4676]: I0124 00:06:58.490803 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b267c5b-c180-4b96-9223-65ed17bd8546-kubelet-dir\") pod \"0b267c5b-c180-4b96-9223-65ed17bd8546\" (UID: \"0b267c5b-c180-4b96-9223-65ed17bd8546\") " Jan 24 00:06:58 crc kubenswrapper[4676]: I0124 00:06:58.490963 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b267c5b-c180-4b96-9223-65ed17bd8546-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0b267c5b-c180-4b96-9223-65ed17bd8546" (UID: "0b267c5b-c180-4b96-9223-65ed17bd8546"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:06:58 crc kubenswrapper[4676]: I0124 00:06:58.510030 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b267c5b-c180-4b96-9223-65ed17bd8546-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b267c5b-c180-4b96-9223-65ed17bd8546" (UID: "0b267c5b-c180-4b96-9223-65ed17bd8546"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:06:58 crc kubenswrapper[4676]: I0124 00:06:58.591791 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b267c5b-c180-4b96-9223-65ed17bd8546-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 00:06:58 crc kubenswrapper[4676]: I0124 00:06:58.591827 4676 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b267c5b-c180-4b96-9223-65ed17bd8546-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 00:06:59 crc kubenswrapper[4676]: I0124 00:06:59.095482 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 00:06:59 crc kubenswrapper[4676]: I0124 00:06:59.096510 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0b267c5b-c180-4b96-9223-65ed17bd8546","Type":"ContainerDied","Data":"988abc66366e3cc44883733bf2fcb7439323f03e26b375f0822d70001d98da5a"} Jan 24 00:06:59 crc kubenswrapper[4676]: I0124 00:06:59.096633 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="988abc66366e3cc44883733bf2fcb7439323f03e26b375f0822d70001d98da5a" Jan 24 00:06:59 crc kubenswrapper[4676]: E0124 00:06:59.099953 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-94fls" podUID="236fa0ff-93a4-429d-9a2a-b1ae84167818" Jan 24 00:06:59 crc kubenswrapper[4676]: E0124 00:06:59.704172 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 24 00:06:59 crc kubenswrapper[4676]: E0124 00:06:59.704539 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49sw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pnqjd_openshift-marketplace(891b78f7-509c-4e8d-b846-52881396a64d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 24 
00:06:59 crc kubenswrapper[4676]: E0124 00:06:59.707179 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pnqjd" podUID="891b78f7-509c-4e8d-b846-52881396a64d" Jan 24 00:07:00 crc kubenswrapper[4676]: E0124 00:07:00.100717 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pnqjd" podUID="891b78f7-509c-4e8d-b846-52881396a64d" Jan 24 00:07:07 crc kubenswrapper[4676]: I0124 00:07:07.149700 4676 generic.go:334] "Generic (PLEG): container finished" podID="10798203-b391-4a87-98a7-b41db2bbb0e2" containerID="664b5bc200aa1b61de8d525097a5da8a9ecda54e783284807e9107a8e1a20233" exitCode=0 Jan 24 00:07:07 crc kubenswrapper[4676]: I0124 00:07:07.149797 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-645jr" event={"ID":"10798203-b391-4a87-98a7-b41db2bbb0e2","Type":"ContainerDied","Data":"664b5bc200aa1b61de8d525097a5da8a9ecda54e783284807e9107a8e1a20233"} Jan 24 00:07:07 crc kubenswrapper[4676]: I0124 00:07:07.156118 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8x4" event={"ID":"2247b246-d06f-4211-ab24-ff0ee05953b9","Type":"ContainerStarted","Data":"0ef99f4ef6f0988afd2cf781f45853cf9ddb0aace684a1d3f7d73355a726b8d3"} Jan 24 00:07:07 crc kubenswrapper[4676]: I0124 00:07:07.162691 4676 generic.go:334] "Generic (PLEG): container finished" podID="920c325a-f36b-4162-9d37-ea88124be938" containerID="c367a0daec440166091e52dccf8eb465443af958a90797e10fac9e024d33ce5b" exitCode=0 Jan 24 00:07:07 crc kubenswrapper[4676]: I0124 
00:07:07.163037 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7nsm" event={"ID":"920c325a-f36b-4162-9d37-ea88124be938","Type":"ContainerDied","Data":"c367a0daec440166091e52dccf8eb465443af958a90797e10fac9e024d33ce5b"} Jan 24 00:07:08 crc kubenswrapper[4676]: I0124 00:07:08.169364 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7nsm" event={"ID":"920c325a-f36b-4162-9d37-ea88124be938","Type":"ContainerStarted","Data":"cec35737af48fd560021306cb0c7587b6f69b93f483a188d615bbf073ab3c211"} Jan 24 00:07:08 crc kubenswrapper[4676]: I0124 00:07:08.171308 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-645jr" event={"ID":"10798203-b391-4a87-98a7-b41db2bbb0e2","Type":"ContainerStarted","Data":"1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449"} Jan 24 00:07:08 crc kubenswrapper[4676]: I0124 00:07:08.172678 4676 generic.go:334] "Generic (PLEG): container finished" podID="2247b246-d06f-4211-ab24-ff0ee05953b9" containerID="0ef99f4ef6f0988afd2cf781f45853cf9ddb0aace684a1d3f7d73355a726b8d3" exitCode=0 Jan 24 00:07:08 crc kubenswrapper[4676]: I0124 00:07:08.172719 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8x4" event={"ID":"2247b246-d06f-4211-ab24-ff0ee05953b9","Type":"ContainerDied","Data":"0ef99f4ef6f0988afd2cf781f45853cf9ddb0aace684a1d3f7d73355a726b8d3"} Jan 24 00:07:08 crc kubenswrapper[4676]: I0124 00:07:08.220751 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l7nsm" podStartSLOduration=3.7519719719999998 podStartE2EDuration="1m7.22073667s" podCreationTimestamp="2026-01-24 00:06:01 +0000 UTC" firstStartedPulling="2026-01-24 00:06:04.184075117 +0000 UTC m=+148.214046118" lastFinishedPulling="2026-01-24 00:07:07.652839805 +0000 UTC m=+211.682810816" observedRunningTime="2026-01-24 
00:07:08.199105779 +0000 UTC m=+212.229076790" watchObservedRunningTime="2026-01-24 00:07:08.22073667 +0000 UTC m=+212.250707671" Jan 24 00:07:08 crc kubenswrapper[4676]: I0124 00:07:08.222607 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-645jr" podStartSLOduration=3.96772861 podStartE2EDuration="1m6.222601487s" podCreationTimestamp="2026-01-24 00:06:02 +0000 UTC" firstStartedPulling="2026-01-24 00:06:05.360618436 +0000 UTC m=+149.390589437" lastFinishedPulling="2026-01-24 00:07:07.615491273 +0000 UTC m=+211.645462314" observedRunningTime="2026-01-24 00:07:08.21903555 +0000 UTC m=+212.249006551" watchObservedRunningTime="2026-01-24 00:07:08.222601487 +0000 UTC m=+212.252572488" Jan 24 00:07:09 crc kubenswrapper[4676]: I0124 00:07:09.182492 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8x4" event={"ID":"2247b246-d06f-4211-ab24-ff0ee05953b9","Type":"ContainerStarted","Data":"0b195c50824310367359504ebc2c1523df4dda14dc8c6d70328cd706ff0670e3"} Jan 24 00:07:09 crc kubenswrapper[4676]: I0124 00:07:09.364144 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:07:09 crc kubenswrapper[4676]: I0124 00:07:09.364198 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:07:09 crc kubenswrapper[4676]: I0124 00:07:09.364239 4676 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:07:09 crc kubenswrapper[4676]: I0124 00:07:09.364739 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d"} pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 00:07:09 crc kubenswrapper[4676]: I0124 00:07:09.364835 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" containerID="cri-o://9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d" gracePeriod=600 Jan 24 00:07:10 crc kubenswrapper[4676]: I0124 00:07:10.190294 4676 generic.go:334] "Generic (PLEG): container finished" podID="592859d8-1f7e-4e35-acc9-635e130ad2d2" containerID="d84cf120443faf814099d3b57961bbb33e517c1b14f12a7cc4e82668a41edcc9" exitCode=0 Jan 24 00:07:10 crc kubenswrapper[4676]: I0124 00:07:10.190383 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxs47" event={"ID":"592859d8-1f7e-4e35-acc9-635e130ad2d2","Type":"ContainerDied","Data":"d84cf120443faf814099d3b57961bbb33e517c1b14f12a7cc4e82668a41edcc9"} Jan 24 00:07:10 crc kubenswrapper[4676]: I0124 00:07:10.195557 4676 generic.go:334] "Generic (PLEG): container finished" podID="53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" containerID="29117c1fdb99e96004bdef61fde30628f1781d37232b198d232fe3b6cf30fc53" exitCode=0 Jan 24 00:07:10 crc kubenswrapper[4676]: I0124 00:07:10.195675 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lq6l" 
event={"ID":"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59","Type":"ContainerDied","Data":"29117c1fdb99e96004bdef61fde30628f1781d37232b198d232fe3b6cf30fc53"} Jan 24 00:07:10 crc kubenswrapper[4676]: I0124 00:07:10.201549 4676 generic.go:334] "Generic (PLEG): container finished" podID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerID="9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d" exitCode=0 Jan 24 00:07:10 crc kubenswrapper[4676]: I0124 00:07:10.201588 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerDied","Data":"9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d"} Jan 24 00:07:10 crc kubenswrapper[4676]: I0124 00:07:10.223927 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sv8x4" podStartSLOduration=5.003844258 podStartE2EDuration="1m8.223894488s" podCreationTimestamp="2026-01-24 00:06:02 +0000 UTC" firstStartedPulling="2026-01-24 00:06:05.400957037 +0000 UTC m=+149.430928038" lastFinishedPulling="2026-01-24 00:07:08.621007267 +0000 UTC m=+212.650978268" observedRunningTime="2026-01-24 00:07:09.199449159 +0000 UTC m=+213.229420160" watchObservedRunningTime="2026-01-24 00:07:10.223894488 +0000 UTC m=+214.253865489" Jan 24 00:07:11 crc kubenswrapper[4676]: I0124 00:07:11.209881 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lq6l" event={"ID":"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59","Type":"ContainerStarted","Data":"c124752b28c2c30e69265c6be80e79a712fa694ec02324e9b77ad0b5952e7fe3"} Jan 24 00:07:11 crc kubenswrapper[4676]: I0124 00:07:11.213785 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"6a2899a2bd48b07d11a54988ae27807805689a40bb41df1e769ca4da39298f4c"} Jan 24 00:07:11 crc kubenswrapper[4676]: I0124 00:07:11.216447 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxs47" event={"ID":"592859d8-1f7e-4e35-acc9-635e130ad2d2","Type":"ContainerStarted","Data":"c53d6e9162fb162b724098a40f36dec02c2bb75d99128d9311568c19c9c217a4"} Jan 24 00:07:11 crc kubenswrapper[4676]: I0124 00:07:11.235178 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9lq6l" podStartSLOduration=4.535611633 podStartE2EDuration="1m12.235163848s" podCreationTimestamp="2026-01-24 00:05:59 +0000 UTC" firstStartedPulling="2026-01-24 00:06:03.032198238 +0000 UTC m=+147.062169239" lastFinishedPulling="2026-01-24 00:07:10.731750453 +0000 UTC m=+214.761721454" observedRunningTime="2026-01-24 00:07:11.234319548 +0000 UTC m=+215.264290559" watchObservedRunningTime="2026-01-24 00:07:11.235163848 +0000 UTC m=+215.265134849" Jan 24 00:07:11 crc kubenswrapper[4676]: I0124 00:07:11.277820 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lxs47" podStartSLOduration=4.607218315 podStartE2EDuration="1m13.277799638s" podCreationTimestamp="2026-01-24 00:05:58 +0000 UTC" firstStartedPulling="2026-01-24 00:06:01.945548796 +0000 UTC m=+145.975519797" lastFinishedPulling="2026-01-24 00:07:10.616130119 +0000 UTC m=+214.646101120" observedRunningTime="2026-01-24 00:07:11.277500208 +0000 UTC m=+215.307471209" watchObservedRunningTime="2026-01-24 00:07:11.277799638 +0000 UTC m=+215.307770649" Jan 24 00:07:11 crc kubenswrapper[4676]: I0124 00:07:11.506421 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:07:11 crc kubenswrapper[4676]: I0124 00:07:11.506505 
4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:07:12 crc kubenswrapper[4676]: I0124 00:07:12.125592 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:07:12 crc kubenswrapper[4676]: I0124 00:07:12.275810 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:07:12 crc kubenswrapper[4676]: I0124 00:07:12.762438 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:07:12 crc kubenswrapper[4676]: I0124 00:07:12.762507 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:07:12 crc kubenswrapper[4676]: I0124 00:07:12.841515 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:07:12 crc kubenswrapper[4676]: I0124 00:07:12.841885 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:07:13 crc kubenswrapper[4676]: I0124 00:07:13.815024 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-645jr" podUID="10798203-b391-4a87-98a7-b41db2bbb0e2" containerName="registry-server" probeResult="failure" output=< Jan 24 00:07:13 crc kubenswrapper[4676]: timeout: failed to connect service ":50051" within 1s Jan 24 00:07:13 crc kubenswrapper[4676]: > Jan 24 00:07:13 crc kubenswrapper[4676]: I0124 00:07:13.890543 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sv8x4" podUID="2247b246-d06f-4211-ab24-ff0ee05953b9" containerName="registry-server" probeResult="failure" output=< Jan 24 00:07:13 crc kubenswrapper[4676]: timeout: failed 
to connect service ":50051" within 1s Jan 24 00:07:13 crc kubenswrapper[4676]: > Jan 24 00:07:15 crc kubenswrapper[4676]: I0124 00:07:15.240724 4676 generic.go:334] "Generic (PLEG): container finished" podID="0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" containerID="932c49b1763ff02b2ff12475c95193c10bded4a7c1630e37d48894f5fd7a938c" exitCode=0 Jan 24 00:07:15 crc kubenswrapper[4676]: I0124 00:07:15.241136 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmrn" event={"ID":"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd","Type":"ContainerDied","Data":"932c49b1763ff02b2ff12475c95193c10bded4a7c1630e37d48894f5fd7a938c"} Jan 24 00:07:15 crc kubenswrapper[4676]: I0124 00:07:15.244993 4676 generic.go:334] "Generic (PLEG): container finished" podID="236fa0ff-93a4-429d-9a2a-b1ae84167818" containerID="634fec8b6271b2f8d9c4f7db61cb6eed664e6577bbd44282bae63b416d050fcb" exitCode=0 Jan 24 00:07:15 crc kubenswrapper[4676]: I0124 00:07:15.245033 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94fls" event={"ID":"236fa0ff-93a4-429d-9a2a-b1ae84167818","Type":"ContainerDied","Data":"634fec8b6271b2f8d9c4f7db61cb6eed664e6577bbd44282bae63b416d050fcb"} Jan 24 00:07:16 crc kubenswrapper[4676]: I0124 00:07:16.995271 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" podUID="89435826-645d-48a2-aa3b-f5c42003dcbe" containerName="oauth-openshift" containerID="cri-o://b436a79b5d8ef5e9fda6092960f14ec8555b8c360d001d483efb56a39a863e65" gracePeriod=15 Jan 24 00:07:17 crc kubenswrapper[4676]: I0124 00:07:17.263713 4676 generic.go:334] "Generic (PLEG): container finished" podID="891b78f7-509c-4e8d-b846-52881396a64d" containerID="ec0016b4a3649cb82109f7041ce0d5dd5501d9ceb2f93c0c5e1205592a82d903" exitCode=0 Jan 24 00:07:17 crc kubenswrapper[4676]: I0124 00:07:17.263796 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-pnqjd" event={"ID":"891b78f7-509c-4e8d-b846-52881396a64d","Type":"ContainerDied","Data":"ec0016b4a3649cb82109f7041ce0d5dd5501d9ceb2f93c0c5e1205592a82d903"} Jan 24 00:07:17 crc kubenswrapper[4676]: I0124 00:07:17.277135 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmrn" event={"ID":"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd","Type":"ContainerStarted","Data":"eaf3e81af38d0f1a8b5ae4e7e29a2416e79e9d3282c5a5d4ac1c2360695efab0"} Jan 24 00:07:17 crc kubenswrapper[4676]: I0124 00:07:17.279413 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94fls" event={"ID":"236fa0ff-93a4-429d-9a2a-b1ae84167818","Type":"ContainerStarted","Data":"f8710f8d49b133576915dd9b46891ac9e00701311c21252958d555d9eb5bf410"} Jan 24 00:07:17 crc kubenswrapper[4676]: I0124 00:07:17.315224 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-94fls" podStartSLOduration=4.369324907 podStartE2EDuration="1m16.315198688s" podCreationTimestamp="2026-01-24 00:06:01 +0000 UTC" firstStartedPulling="2026-01-24 00:06:04.185214503 +0000 UTC m=+148.215185504" lastFinishedPulling="2026-01-24 00:07:16.131088284 +0000 UTC m=+220.161059285" observedRunningTime="2026-01-24 00:07:17.311018888 +0000 UTC m=+221.340989899" watchObservedRunningTime="2026-01-24 00:07:17.315198688 +0000 UTC m=+221.345169709" Jan 24 00:07:17 crc kubenswrapper[4676]: I0124 00:07:17.337455 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rxmrn" podStartSLOduration=5.360416809 podStartE2EDuration="1m18.337428011s" podCreationTimestamp="2026-01-24 00:05:59 +0000 UTC" firstStartedPulling="2026-01-24 00:06:03.079560954 +0000 UTC m=+147.109531955" lastFinishedPulling="2026-01-24 00:07:16.056572146 +0000 UTC m=+220.086543157" observedRunningTime="2026-01-24 
00:07:17.333473019 +0000 UTC m=+221.363444030" watchObservedRunningTime="2026-01-24 00:07:17.337428011 +0000 UTC m=+221.367399002" Jan 24 00:07:18 crc kubenswrapper[4676]: I0124 00:07:18.286638 4676 generic.go:334] "Generic (PLEG): container finished" podID="89435826-645d-48a2-aa3b-f5c42003dcbe" containerID="b436a79b5d8ef5e9fda6092960f14ec8555b8c360d001d483efb56a39a863e65" exitCode=0 Jan 24 00:07:18 crc kubenswrapper[4676]: I0124 00:07:18.286783 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" event={"ID":"89435826-645d-48a2-aa3b-f5c42003dcbe","Type":"ContainerDied","Data":"b436a79b5d8ef5e9fda6092960f14ec8555b8c360d001d483efb56a39a863e65"} Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.031068 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.071691 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlm94\" (UniqueName: \"kubernetes.io/projected/89435826-645d-48a2-aa3b-f5c42003dcbe-kube-api-access-tlm94\") pod \"89435826-645d-48a2-aa3b-f5c42003dcbe\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.071747 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-serving-cert\") pod \"89435826-645d-48a2-aa3b-f5c42003dcbe\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.071780 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-login\") pod 
\"89435826-645d-48a2-aa3b-f5c42003dcbe\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.071823 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-service-ca\") pod \"89435826-645d-48a2-aa3b-f5c42003dcbe\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.071843 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-ocp-branding-template\") pod \"89435826-645d-48a2-aa3b-f5c42003dcbe\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.071872 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-router-certs\") pod \"89435826-645d-48a2-aa3b-f5c42003dcbe\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.071893 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-error\") pod \"89435826-645d-48a2-aa3b-f5c42003dcbe\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.071911 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-session\") pod \"89435826-645d-48a2-aa3b-f5c42003dcbe\" (UID: 
\"89435826-645d-48a2-aa3b-f5c42003dcbe\") " Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.071935 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-cliconfig\") pod \"89435826-645d-48a2-aa3b-f5c42003dcbe\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.071955 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-provider-selection\") pod \"89435826-645d-48a2-aa3b-f5c42003dcbe\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.071982 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-idp-0-file-data\") pod \"89435826-645d-48a2-aa3b-f5c42003dcbe\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.072020 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89435826-645d-48a2-aa3b-f5c42003dcbe-audit-dir\") pod \"89435826-645d-48a2-aa3b-f5c42003dcbe\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.072056 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-audit-policies\") pod \"89435826-645d-48a2-aa3b-f5c42003dcbe\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.072075 4676 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-trusted-ca-bundle\") pod \"89435826-645d-48a2-aa3b-f5c42003dcbe\" (UID: \"89435826-645d-48a2-aa3b-f5c42003dcbe\") " Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.072396 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89435826-645d-48a2-aa3b-f5c42003dcbe-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "89435826-645d-48a2-aa3b-f5c42003dcbe" (UID: "89435826-645d-48a2-aa3b-f5c42003dcbe"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.074881 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "89435826-645d-48a2-aa3b-f5c42003dcbe" (UID: "89435826-645d-48a2-aa3b-f5c42003dcbe"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.075284 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "89435826-645d-48a2-aa3b-f5c42003dcbe" (UID: "89435826-645d-48a2-aa3b-f5c42003dcbe"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.075908 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "89435826-645d-48a2-aa3b-f5c42003dcbe" (UID: "89435826-645d-48a2-aa3b-f5c42003dcbe"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.093545 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79b5c48459-zscbn"] Jan 24 00:07:19 crc kubenswrapper[4676]: E0124 00:07:19.094021 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b267c5b-c180-4b96-9223-65ed17bd8546" containerName="pruner" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.094118 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b267c5b-c180-4b96-9223-65ed17bd8546" containerName="pruner" Jan 24 00:07:19 crc kubenswrapper[4676]: E0124 00:07:19.094206 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89435826-645d-48a2-aa3b-f5c42003dcbe" containerName="oauth-openshift" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.094284 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="89435826-645d-48a2-aa3b-f5c42003dcbe" containerName="oauth-openshift" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.094558 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="89435826-645d-48a2-aa3b-f5c42003dcbe" containerName="oauth-openshift" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.094652 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b267c5b-c180-4b96-9223-65ed17bd8546" containerName="pruner" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.095516 4676 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.094279 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "89435826-645d-48a2-aa3b-f5c42003dcbe" (UID: "89435826-645d-48a2-aa3b-f5c42003dcbe"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.102065 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79b5c48459-zscbn"] Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.106959 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89435826-645d-48a2-aa3b-f5c42003dcbe-kube-api-access-tlm94" (OuterVolumeSpecName: "kube-api-access-tlm94") pod "89435826-645d-48a2-aa3b-f5c42003dcbe" (UID: "89435826-645d-48a2-aa3b-f5c42003dcbe"). InnerVolumeSpecName "kube-api-access-tlm94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.113456 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "89435826-645d-48a2-aa3b-f5c42003dcbe" (UID: "89435826-645d-48a2-aa3b-f5c42003dcbe"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.114327 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "89435826-645d-48a2-aa3b-f5c42003dcbe" (UID: "89435826-645d-48a2-aa3b-f5c42003dcbe"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.116004 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "89435826-645d-48a2-aa3b-f5c42003dcbe" (UID: "89435826-645d-48a2-aa3b-f5c42003dcbe"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.116653 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "89435826-645d-48a2-aa3b-f5c42003dcbe" (UID: "89435826-645d-48a2-aa3b-f5c42003dcbe"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.124473 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "89435826-645d-48a2-aa3b-f5c42003dcbe" (UID: "89435826-645d-48a2-aa3b-f5c42003dcbe"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.128472 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "89435826-645d-48a2-aa3b-f5c42003dcbe" (UID: "89435826-645d-48a2-aa3b-f5c42003dcbe"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.128817 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "89435826-645d-48a2-aa3b-f5c42003dcbe" (UID: "89435826-645d-48a2-aa3b-f5c42003dcbe"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.129179 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "89435826-645d-48a2-aa3b-f5c42003dcbe" (UID: "89435826-645d-48a2-aa3b-f5c42003dcbe"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.173261 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-user-template-error\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.173574 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v29c7\" (UniqueName: \"kubernetes.io/projected/66786051-105a-4770-ba4f-25f9b16bc2bb-kube-api-access-v29c7\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.173699 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-service-ca\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.173805 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.173909 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.174012 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-router-certs\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.174305 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/66786051-105a-4770-ba4f-25f9b16bc2bb-audit-policies\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.174450 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66786051-105a-4770-ba4f-25f9b16bc2bb-audit-dir\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.174573 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.174146 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.174684 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.174740 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.174900 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-user-template-login\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.175022 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 
00:07:19.175362 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.175523 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-session\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.175657 4676 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.175739 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.175805 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlm94\" (UniqueName: \"kubernetes.io/projected/89435826-645d-48a2-aa3b-f5c42003dcbe-kube-api-access-tlm94\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.175881 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.175954 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.176010 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.176066 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.176127 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.176187 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.176261 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.176324 4676 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.176415 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.176483 4676 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/89435826-645d-48a2-aa3b-f5c42003dcbe-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.176542 4676 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89435826-645d-48a2-aa3b-f5c42003dcbe-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.218347 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.278273 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-user-template-error\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.278363 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v29c7\" (UniqueName: 
\"kubernetes.io/projected/66786051-105a-4770-ba4f-25f9b16bc2bb-kube-api-access-v29c7\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.278413 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-service-ca\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.278443 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.278468 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.278492 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-router-certs\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " 
pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.278523 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/66786051-105a-4770-ba4f-25f9b16bc2bb-audit-policies\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.278544 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66786051-105a-4770-ba4f-25f9b16bc2bb-audit-dir\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.278564 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.278584 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.278612 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.278636 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-user-template-login\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.278660 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.278704 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-session\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.278932 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66786051-105a-4770-ba4f-25f9b16bc2bb-audit-dir\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc 
kubenswrapper[4676]: I0124 00:07:19.281053 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-service-ca\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.281856 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/66786051-105a-4770-ba4f-25f9b16bc2bb-audit-policies\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.282825 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.282885 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.286009 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.286430 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.286828 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.286944 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-session\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.287711 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-router-certs\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 
24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.288078 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-user-template-error\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.289926 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-user-template-login\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.294355 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/66786051-105a-4770-ba4f-25f9b16bc2bb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.298687 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v29c7\" (UniqueName: \"kubernetes.io/projected/66786051-105a-4770-ba4f-25f9b16bc2bb-kube-api-access-v29c7\") pod \"oauth-openshift-79b5c48459-zscbn\" (UID: \"66786051-105a-4770-ba4f-25f9b16bc2bb\") " pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.300164 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" 
event={"ID":"89435826-645d-48a2-aa3b-f5c42003dcbe","Type":"ContainerDied","Data":"f8acee532eb12a14f5c8f001c0b4e4969213cd1ade4bba8504365d2ac03f9941"} Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.300248 4676 scope.go:117] "RemoveContainer" containerID="b436a79b5d8ef5e9fda6092960f14ec8555b8c360d001d483efb56a39a863e65" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.300755 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kvcv8" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.344745 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kvcv8"] Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.345182 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kvcv8"] Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.352540 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.432722 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.432944 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.463055 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.478335 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.720560 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.722318 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.763593 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:07:19 crc kubenswrapper[4676]: I0124 00:07:19.984662 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79b5c48459-zscbn"] Jan 24 00:07:20 crc kubenswrapper[4676]: I0124 00:07:20.266308 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89435826-645d-48a2-aa3b-f5c42003dcbe" path="/var/lib/kubelet/pods/89435826-645d-48a2-aa3b-f5c42003dcbe/volumes" Jan 24 00:07:20 crc kubenswrapper[4676]: I0124 00:07:20.307809 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" event={"ID":"66786051-105a-4770-ba4f-25f9b16bc2bb","Type":"ContainerStarted","Data":"b7ccf8f1b0c31882e06a07a8fca815211dc5741c6152d88fac1459c3e3197010"} Jan 24 00:07:20 crc kubenswrapper[4676]: I0124 00:07:20.361149 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:07:21 crc kubenswrapper[4676]: I0124 00:07:21.330471 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9lq6l"] Jan 24 00:07:21 
crc kubenswrapper[4676]: I0124 00:07:21.368599 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:07:21 crc kubenswrapper[4676]: I0124 00:07:21.903276 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:07:21 crc kubenswrapper[4676]: I0124 00:07:21.903640 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:07:21 crc kubenswrapper[4676]: I0124 00:07:21.949958 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:07:22 crc kubenswrapper[4676]: I0124 00:07:22.327864 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnqjd" event={"ID":"891b78f7-509c-4e8d-b846-52881396a64d","Type":"ContainerStarted","Data":"fba973515d63eaeb97481aaf2d27398145972a0521a5d48402281dfaca486461"} Jan 24 00:07:22 crc kubenswrapper[4676]: I0124 00:07:22.329745 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" event={"ID":"66786051-105a-4770-ba4f-25f9b16bc2bb","Type":"ContainerStarted","Data":"528a5f11d379d8a4ae5dde74036af662d2236007e58088a64dec654b7996ed80"} Jan 24 00:07:22 crc kubenswrapper[4676]: I0124 00:07:22.354516 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pnqjd" podStartSLOduration=4.957957099 podStartE2EDuration="1m23.354487407s" podCreationTimestamp="2026-01-24 00:05:59 +0000 UTC" firstStartedPulling="2026-01-24 00:06:03.009726648 +0000 UTC m=+147.039697649" lastFinishedPulling="2026-01-24 00:07:21.406256936 +0000 UTC m=+225.436227957" observedRunningTime="2026-01-24 00:07:22.350421322 +0000 UTC m=+226.380392353" watchObservedRunningTime="2026-01-24 
00:07:22.354487407 +0000 UTC m=+226.384458428" Jan 24 00:07:22 crc kubenswrapper[4676]: I0124 00:07:22.396722 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:07:22 crc kubenswrapper[4676]: I0124 00:07:22.419019 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" podStartSLOduration=31.418994948 podStartE2EDuration="31.418994948s" podCreationTimestamp="2026-01-24 00:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:07:22.381999208 +0000 UTC m=+226.411970239" watchObservedRunningTime="2026-01-24 00:07:22.418994948 +0000 UTC m=+226.448965989" Jan 24 00:07:22 crc kubenswrapper[4676]: I0124 00:07:22.803921 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:07:22 crc kubenswrapper[4676]: I0124 00:07:22.867763 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:07:22 crc kubenswrapper[4676]: I0124 00:07:22.923975 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:07:22 crc kubenswrapper[4676]: I0124 00:07:22.968586 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:07:23 crc kubenswrapper[4676]: I0124 00:07:23.336121 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9lq6l" podUID="53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" containerName="registry-server" containerID="cri-o://c124752b28c2c30e69265c6be80e79a712fa694ec02324e9b77ad0b5952e7fe3" gracePeriod=2 Jan 24 00:07:23 crc kubenswrapper[4676]: I0124 
00:07:23.337211 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:23 crc kubenswrapper[4676]: I0124 00:07:23.342406 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79b5c48459-zscbn" Jan 24 00:07:23 crc kubenswrapper[4676]: I0124 00:07:23.529485 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-94fls"] Jan 24 00:07:24 crc kubenswrapper[4676]: I0124 00:07:24.342625 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-94fls" podUID="236fa0ff-93a4-429d-9a2a-b1ae84167818" containerName="registry-server" containerID="cri-o://f8710f8d49b133576915dd9b46891ac9e00701311c21252958d555d9eb5bf410" gracePeriod=2 Jan 24 00:07:25 crc kubenswrapper[4676]: I0124 00:07:25.930218 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sv8x4"] Jan 24 00:07:25 crc kubenswrapper[4676]: I0124 00:07:25.931066 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sv8x4" podUID="2247b246-d06f-4211-ab24-ff0ee05953b9" containerName="registry-server" containerID="cri-o://0b195c50824310367359504ebc2c1523df4dda14dc8c6d70328cd706ff0670e3" gracePeriod=2 Jan 24 00:07:26 crc kubenswrapper[4676]: I0124 00:07:26.360329 4676 generic.go:334] "Generic (PLEG): container finished" podID="53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" containerID="c124752b28c2c30e69265c6be80e79a712fa694ec02324e9b77ad0b5952e7fe3" exitCode=0 Jan 24 00:07:26 crc kubenswrapper[4676]: I0124 00:07:26.360424 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lq6l" 
event={"ID":"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59","Type":"ContainerDied","Data":"c124752b28c2c30e69265c6be80e79a712fa694ec02324e9b77ad0b5952e7fe3"} Jan 24 00:07:27 crc kubenswrapper[4676]: I0124 00:07:27.370066 4676 generic.go:334] "Generic (PLEG): container finished" podID="236fa0ff-93a4-429d-9a2a-b1ae84167818" containerID="f8710f8d49b133576915dd9b46891ac9e00701311c21252958d555d9eb5bf410" exitCode=0 Jan 24 00:07:27 crc kubenswrapper[4676]: I0124 00:07:27.370241 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94fls" event={"ID":"236fa0ff-93a4-429d-9a2a-b1ae84167818","Type":"ContainerDied","Data":"f8710f8d49b133576915dd9b46891ac9e00701311c21252958d555d9eb5bf410"} Jan 24 00:07:27 crc kubenswrapper[4676]: I0124 00:07:27.505502 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:07:27 crc kubenswrapper[4676]: I0124 00:07:27.603371 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-utilities\") pod \"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59\" (UID: \"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59\") " Jan 24 00:07:27 crc kubenswrapper[4676]: I0124 00:07:27.604017 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdl5n\" (UniqueName: \"kubernetes.io/projected/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-kube-api-access-jdl5n\") pod \"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59\" (UID: \"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59\") " Jan 24 00:07:27 crc kubenswrapper[4676]: I0124 00:07:27.604150 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-catalog-content\") pod \"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59\" (UID: \"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59\") " Jan 24 
00:07:27 crc kubenswrapper[4676]: I0124 00:07:27.604595 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-utilities" (OuterVolumeSpecName: "utilities") pod "53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" (UID: "53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:07:27 crc kubenswrapper[4676]: I0124 00:07:27.604888 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:27 crc kubenswrapper[4676]: I0124 00:07:27.615624 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-kube-api-access-jdl5n" (OuterVolumeSpecName: "kube-api-access-jdl5n") pod "53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" (UID: "53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59"). InnerVolumeSpecName "kube-api-access-jdl5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:07:27 crc kubenswrapper[4676]: I0124 00:07:27.660980 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" (UID: "53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:07:27 crc kubenswrapper[4676]: I0124 00:07:27.707263 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:27 crc kubenswrapper[4676]: I0124 00:07:27.707303 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdl5n\" (UniqueName: \"kubernetes.io/projected/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59-kube-api-access-jdl5n\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:27 crc kubenswrapper[4676]: I0124 00:07:27.874700 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.010483 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/236fa0ff-93a4-429d-9a2a-b1ae84167818-utilities\") pod \"236fa0ff-93a4-429d-9a2a-b1ae84167818\" (UID: \"236fa0ff-93a4-429d-9a2a-b1ae84167818\") " Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.010597 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5djr\" (UniqueName: \"kubernetes.io/projected/236fa0ff-93a4-429d-9a2a-b1ae84167818-kube-api-access-g5djr\") pod \"236fa0ff-93a4-429d-9a2a-b1ae84167818\" (UID: \"236fa0ff-93a4-429d-9a2a-b1ae84167818\") " Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.010677 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/236fa0ff-93a4-429d-9a2a-b1ae84167818-catalog-content\") pod \"236fa0ff-93a4-429d-9a2a-b1ae84167818\" (UID: \"236fa0ff-93a4-429d-9a2a-b1ae84167818\") " Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.011254 4676 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/236fa0ff-93a4-429d-9a2a-b1ae84167818-utilities" (OuterVolumeSpecName: "utilities") pod "236fa0ff-93a4-429d-9a2a-b1ae84167818" (UID: "236fa0ff-93a4-429d-9a2a-b1ae84167818"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.013220 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/236fa0ff-93a4-429d-9a2a-b1ae84167818-kube-api-access-g5djr" (OuterVolumeSpecName: "kube-api-access-g5djr") pod "236fa0ff-93a4-429d-9a2a-b1ae84167818" (UID: "236fa0ff-93a4-429d-9a2a-b1ae84167818"). InnerVolumeSpecName "kube-api-access-g5djr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.031934 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/236fa0ff-93a4-429d-9a2a-b1ae84167818-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "236fa0ff-93a4-429d-9a2a-b1ae84167818" (UID: "236fa0ff-93a4-429d-9a2a-b1ae84167818"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.112336 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5djr\" (UniqueName: \"kubernetes.io/projected/236fa0ff-93a4-429d-9a2a-b1ae84167818-kube-api-access-g5djr\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.112380 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/236fa0ff-93a4-429d-9a2a-b1ae84167818-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.112400 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/236fa0ff-93a4-429d-9a2a-b1ae84167818-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.346747 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.376689 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lq6l" event={"ID":"53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59","Type":"ContainerDied","Data":"240a15f7e6730a7c6ad0337131f8a5c437c590dd34ae29a62a731e625e9f6839"} Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.376746 4676 scope.go:117] "RemoveContainer" containerID="c124752b28c2c30e69265c6be80e79a712fa694ec02324e9b77ad0b5952e7fe3" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.376891 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9lq6l" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.381781 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94fls" event={"ID":"236fa0ff-93a4-429d-9a2a-b1ae84167818","Type":"ContainerDied","Data":"0a7c742121571c7169e4b719618bc512b51f0e77919d08a6e5c83dac8827150c"} Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.381890 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94fls" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.383444 4676 generic.go:334] "Generic (PLEG): container finished" podID="2247b246-d06f-4211-ab24-ff0ee05953b9" containerID="0b195c50824310367359504ebc2c1523df4dda14dc8c6d70328cd706ff0670e3" exitCode=0 Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.383471 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8x4" event={"ID":"2247b246-d06f-4211-ab24-ff0ee05953b9","Type":"ContainerDied","Data":"0b195c50824310367359504ebc2c1523df4dda14dc8c6d70328cd706ff0670e3"} Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.383491 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv8x4" event={"ID":"2247b246-d06f-4211-ab24-ff0ee05953b9","Type":"ContainerDied","Data":"219fafaa737e407cadd362820da7774d28e18cc7cb1ebb7f3b2b92c72a3323f6"} Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.384429 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sv8x4" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.396859 4676 scope.go:117] "RemoveContainer" containerID="29117c1fdb99e96004bdef61fde30628f1781d37232b198d232fe3b6cf30fc53" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.403948 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9lq6l"] Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.403993 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9lq6l"] Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.425644 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-94fls"] Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.426846 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-94fls"] Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.432146 4676 scope.go:117] "RemoveContainer" containerID="34fdffdd9a497f9a6cd5b90c76fdd621cc1104ee041982343a04412f81b7f452" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.453071 4676 scope.go:117] "RemoveContainer" containerID="f8710f8d49b133576915dd9b46891ac9e00701311c21252958d555d9eb5bf410" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.474311 4676 scope.go:117] "RemoveContainer" containerID="634fec8b6271b2f8d9c4f7db61cb6eed664e6577bbd44282bae63b416d050fcb" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.489245 4676 scope.go:117] "RemoveContainer" containerID="42507e3dd923f3d05f90ce7ba013d39b4e6323bf6515e461e6d463d4272af52b" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.505451 4676 scope.go:117] "RemoveContainer" containerID="0b195c50824310367359504ebc2c1523df4dda14dc8c6d70328cd706ff0670e3" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.519677 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-9n6gg\" (UniqueName: \"kubernetes.io/projected/2247b246-d06f-4211-ab24-ff0ee05953b9-kube-api-access-9n6gg\") pod \"2247b246-d06f-4211-ab24-ff0ee05953b9\" (UID: \"2247b246-d06f-4211-ab24-ff0ee05953b9\") " Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.519736 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2247b246-d06f-4211-ab24-ff0ee05953b9-utilities\") pod \"2247b246-d06f-4211-ab24-ff0ee05953b9\" (UID: \"2247b246-d06f-4211-ab24-ff0ee05953b9\") " Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.519861 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2247b246-d06f-4211-ab24-ff0ee05953b9-catalog-content\") pod \"2247b246-d06f-4211-ab24-ff0ee05953b9\" (UID: \"2247b246-d06f-4211-ab24-ff0ee05953b9\") " Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.521396 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2247b246-d06f-4211-ab24-ff0ee05953b9-utilities" (OuterVolumeSpecName: "utilities") pod "2247b246-d06f-4211-ab24-ff0ee05953b9" (UID: "2247b246-d06f-4211-ab24-ff0ee05953b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.524474 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2247b246-d06f-4211-ab24-ff0ee05953b9-kube-api-access-9n6gg" (OuterVolumeSpecName: "kube-api-access-9n6gg") pod "2247b246-d06f-4211-ab24-ff0ee05953b9" (UID: "2247b246-d06f-4211-ab24-ff0ee05953b9"). InnerVolumeSpecName "kube-api-access-9n6gg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.527712 4676 scope.go:117] "RemoveContainer" containerID="0ef99f4ef6f0988afd2cf781f45853cf9ddb0aace684a1d3f7d73355a726b8d3" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.543834 4676 scope.go:117] "RemoveContainer" containerID="07acdb0762573bb2b8235a6edcf0cfe4842b7bcb639741acfb88ccb64944ecbf" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.560567 4676 scope.go:117] "RemoveContainer" containerID="0b195c50824310367359504ebc2c1523df4dda14dc8c6d70328cd706ff0670e3" Jan 24 00:07:28 crc kubenswrapper[4676]: E0124 00:07:28.561200 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b195c50824310367359504ebc2c1523df4dda14dc8c6d70328cd706ff0670e3\": container with ID starting with 0b195c50824310367359504ebc2c1523df4dda14dc8c6d70328cd706ff0670e3 not found: ID does not exist" containerID="0b195c50824310367359504ebc2c1523df4dda14dc8c6d70328cd706ff0670e3" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.561231 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b195c50824310367359504ebc2c1523df4dda14dc8c6d70328cd706ff0670e3"} err="failed to get container status \"0b195c50824310367359504ebc2c1523df4dda14dc8c6d70328cd706ff0670e3\": rpc error: code = NotFound desc = could not find container \"0b195c50824310367359504ebc2c1523df4dda14dc8c6d70328cd706ff0670e3\": container with ID starting with 0b195c50824310367359504ebc2c1523df4dda14dc8c6d70328cd706ff0670e3 not found: ID does not exist" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.561253 4676 scope.go:117] "RemoveContainer" containerID="0ef99f4ef6f0988afd2cf781f45853cf9ddb0aace684a1d3f7d73355a726b8d3" Jan 24 00:07:28 crc kubenswrapper[4676]: E0124 00:07:28.561766 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"0ef99f4ef6f0988afd2cf781f45853cf9ddb0aace684a1d3f7d73355a726b8d3\": container with ID starting with 0ef99f4ef6f0988afd2cf781f45853cf9ddb0aace684a1d3f7d73355a726b8d3 not found: ID does not exist" containerID="0ef99f4ef6f0988afd2cf781f45853cf9ddb0aace684a1d3f7d73355a726b8d3" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.561822 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ef99f4ef6f0988afd2cf781f45853cf9ddb0aace684a1d3f7d73355a726b8d3"} err="failed to get container status \"0ef99f4ef6f0988afd2cf781f45853cf9ddb0aace684a1d3f7d73355a726b8d3\": rpc error: code = NotFound desc = could not find container \"0ef99f4ef6f0988afd2cf781f45853cf9ddb0aace684a1d3f7d73355a726b8d3\": container with ID starting with 0ef99f4ef6f0988afd2cf781f45853cf9ddb0aace684a1d3f7d73355a726b8d3 not found: ID does not exist" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.561860 4676 scope.go:117] "RemoveContainer" containerID="07acdb0762573bb2b8235a6edcf0cfe4842b7bcb639741acfb88ccb64944ecbf" Jan 24 00:07:28 crc kubenswrapper[4676]: E0124 00:07:28.562275 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07acdb0762573bb2b8235a6edcf0cfe4842b7bcb639741acfb88ccb64944ecbf\": container with ID starting with 07acdb0762573bb2b8235a6edcf0cfe4842b7bcb639741acfb88ccb64944ecbf not found: ID does not exist" containerID="07acdb0762573bb2b8235a6edcf0cfe4842b7bcb639741acfb88ccb64944ecbf" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.562310 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07acdb0762573bb2b8235a6edcf0cfe4842b7bcb639741acfb88ccb64944ecbf"} err="failed to get container status \"07acdb0762573bb2b8235a6edcf0cfe4842b7bcb639741acfb88ccb64944ecbf\": rpc error: code = NotFound desc = could not find container \"07acdb0762573bb2b8235a6edcf0cfe4842b7bcb639741acfb88ccb64944ecbf\": 
container with ID starting with 07acdb0762573bb2b8235a6edcf0cfe4842b7bcb639741acfb88ccb64944ecbf not found: ID does not exist" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.621426 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n6gg\" (UniqueName: \"kubernetes.io/projected/2247b246-d06f-4211-ab24-ff0ee05953b9-kube-api-access-9n6gg\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.621628 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2247b246-d06f-4211-ab24-ff0ee05953b9-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.646085 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2247b246-d06f-4211-ab24-ff0ee05953b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2247b246-d06f-4211-ab24-ff0ee05953b9" (UID: "2247b246-d06f-4211-ab24-ff0ee05953b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.712135 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sv8x4"] Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.725233 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2247b246-d06f-4211-ab24-ff0ee05953b9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:28 crc kubenswrapper[4676]: I0124 00:07:28.727982 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sv8x4"] Jan 24 00:07:29 crc kubenswrapper[4676]: I0124 00:07:29.948294 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:07:29 crc kubenswrapper[4676]: I0124 00:07:29.948794 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:07:30 crc kubenswrapper[4676]: I0124 00:07:30.017338 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:07:30 crc kubenswrapper[4676]: I0124 00:07:30.267727 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2247b246-d06f-4211-ab24-ff0ee05953b9" path="/var/lib/kubelet/pods/2247b246-d06f-4211-ab24-ff0ee05953b9/volumes" Jan 24 00:07:30 crc kubenswrapper[4676]: I0124 00:07:30.268459 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="236fa0ff-93a4-429d-9a2a-b1ae84167818" path="/var/lib/kubelet/pods/236fa0ff-93a4-429d-9a2a-b1ae84167818/volumes" Jan 24 00:07:30 crc kubenswrapper[4676]: I0124 00:07:30.269111 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" path="/var/lib/kubelet/pods/53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59/volumes" Jan 24 
00:07:30 crc kubenswrapper[4676]: I0124 00:07:30.478505 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.746704 4676 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 00:07:33 crc kubenswrapper[4676]: E0124 00:07:33.747286 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236fa0ff-93a4-429d-9a2a-b1ae84167818" containerName="extract-content" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.747297 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="236fa0ff-93a4-429d-9a2a-b1ae84167818" containerName="extract-content" Jan 24 00:07:33 crc kubenswrapper[4676]: E0124 00:07:33.747305 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236fa0ff-93a4-429d-9a2a-b1ae84167818" containerName="extract-utilities" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.747310 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="236fa0ff-93a4-429d-9a2a-b1ae84167818" containerName="extract-utilities" Jan 24 00:07:33 crc kubenswrapper[4676]: E0124 00:07:33.747320 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2247b246-d06f-4211-ab24-ff0ee05953b9" containerName="extract-content" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.747326 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2247b246-d06f-4211-ab24-ff0ee05953b9" containerName="extract-content" Jan 24 00:07:33 crc kubenswrapper[4676]: E0124 00:07:33.747333 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" containerName="extract-utilities" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.747339 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" containerName="extract-utilities" Jan 24 00:07:33 crc 
kubenswrapper[4676]: E0124 00:07:33.747351 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" containerName="registry-server" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.747357 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" containerName="registry-server" Jan 24 00:07:33 crc kubenswrapper[4676]: E0124 00:07:33.747366 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" containerName="extract-content" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.747371 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" containerName="extract-content" Jan 24 00:07:33 crc kubenswrapper[4676]: E0124 00:07:33.747382 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2247b246-d06f-4211-ab24-ff0ee05953b9" containerName="extract-utilities" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.747388 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2247b246-d06f-4211-ab24-ff0ee05953b9" containerName="extract-utilities" Jan 24 00:07:33 crc kubenswrapper[4676]: E0124 00:07:33.747409 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2247b246-d06f-4211-ab24-ff0ee05953b9" containerName="registry-server" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.747417 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2247b246-d06f-4211-ab24-ff0ee05953b9" containerName="registry-server" Jan 24 00:07:33 crc kubenswrapper[4676]: E0124 00:07:33.747427 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236fa0ff-93a4-429d-9a2a-b1ae84167818" containerName="registry-server" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.747432 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="236fa0ff-93a4-429d-9a2a-b1ae84167818" containerName="registry-server" Jan 24 00:07:33 crc 
kubenswrapper[4676]: I0124 00:07:33.747522 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="2247b246-d06f-4211-ab24-ff0ee05953b9" containerName="registry-server" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.747535 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ec2f7b-a5ba-41ba-9e8b-2a04032f7e59" containerName="registry-server" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.747543 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="236fa0ff-93a4-429d-9a2a-b1ae84167818" containerName="registry-server" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.747846 4676 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.747967 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748073 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142" gracePeriod=15 Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748137 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa" gracePeriod=15 Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748175 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" 
containerID="cri-o://1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b" gracePeriod=15 Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748194 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a" gracePeriod=15 Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748137 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9" gracePeriod=15 Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748578 4676 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 00:07:33 crc kubenswrapper[4676]: E0124 00:07:33.748762 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748773 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 24 00:07:33 crc kubenswrapper[4676]: E0124 00:07:33.748780 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748786 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 24 00:07:33 crc kubenswrapper[4676]: E0124 00:07:33.748797 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748803 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 00:07:33 crc kubenswrapper[4676]: E0124 00:07:33.748809 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748815 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 24 00:07:33 crc kubenswrapper[4676]: E0124 00:07:33.748829 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748834 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 24 00:07:33 crc kubenswrapper[4676]: E0124 00:07:33.748843 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748848 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748931 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748940 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748948 4676 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748958 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.748965 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 24 00:07:33 crc kubenswrapper[4676]: E0124 00:07:33.749061 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.749067 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.749144 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.787492 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.842588 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.842633 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.842653 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.842673 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.842962 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.843018 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.843059 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.843168 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.944809 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.944874 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.944902 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.944952 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.944985 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.944986 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.945031 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.945069 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.945008 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.945013 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.945043 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.945124 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.945147 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.945069 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.945186 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:33 crc kubenswrapper[4676]: I0124 00:07:33.945207 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.085429 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:07:34 crc kubenswrapper[4676]: W0124 00:07:34.101637 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-976837c5d5eaa84109b1d0a099c5a8b73d4b2e0dae46767124489e7eb0947d07 WatchSource:0}: Error finding container 976837c5d5eaa84109b1d0a099c5a8b73d4b2e0dae46767124489e7eb0947d07: Status 404 returned error can't find the container with id 976837c5d5eaa84109b1d0a099c5a8b73d4b2e0dae46767124489e7eb0947d07 Jan 24 00:07:34 crc kubenswrapper[4676]: E0124 00:07:34.104201 4676 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.27:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d8214b6bb6799 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 00:07:34.103164825 +0000 UTC m=+238.133135856,LastTimestamp:2026-01-24 00:07:34.103164825 +0000 UTC m=+238.133135856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.433751 4676 generic.go:334] "Generic (PLEG): container finished" podID="c280bb7c-e926-46c9-bb2c-d88c64634673" containerID="f6bdacd4b420bb177446b935f1e3846bc08e4a95c458b2ca6f67dc97f5745d33" exitCode=0 Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.433815 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c280bb7c-e926-46c9-bb2c-d88c64634673","Type":"ContainerDied","Data":"f6bdacd4b420bb177446b935f1e3846bc08e4a95c458b2ca6f67dc97f5745d33"} Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.434439 4676 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.434817 4676 status_manager.go:851] "Failed to get status for pod" podUID="c280bb7c-e926-46c9-bb2c-d88c64634673" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.436551 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.437697 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.438336 4676 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a" exitCode=0 Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.438359 4676 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9" exitCode=0 Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.438369 4676 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa" exitCode=0 Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.438394 4676 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b" exitCode=2 Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.438470 4676 scope.go:117] "RemoveContainer" containerID="5c47cad0fa09af5d6054694c41c2fd3ce35fdc093e673df7283b56ad009a05fc" Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.440168 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"31446979da9b2814d80beb1cd72c15277eaaac69a2a2c9676d049a6a8acf10c5"} Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.440201 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"976837c5d5eaa84109b1d0a099c5a8b73d4b2e0dae46767124489e7eb0947d07"} Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.440954 4676 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:34 crc kubenswrapper[4676]: I0124 00:07:34.441247 4676 status_manager.go:851] "Failed to get status for pod" podUID="c280bb7c-e926-46c9-bb2c-d88c64634673" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:35 crc kubenswrapper[4676]: E0124 00:07:35.296006 4676 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.27:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d8214b6bb6799 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 00:07:34.103164825 +0000 UTC m=+238.133135856,LastTimestamp:2026-01-24 00:07:34.103164825 +0000 UTC m=+238.133135856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 24 00:07:35 crc kubenswrapper[4676]: I0124 00:07:35.449414 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 00:07:35 crc kubenswrapper[4676]: I0124 00:07:35.697765 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 00:07:35 crc kubenswrapper[4676]: I0124 00:07:35.698713 4676 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:35 crc kubenswrapper[4676]: I0124 00:07:35.698867 4676 status_manager.go:851] "Failed to get status for pod" podUID="c280bb7c-e926-46c9-bb2c-d88c64634673" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:35 crc kubenswrapper[4676]: I0124 00:07:35.766106 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c280bb7c-e926-46c9-bb2c-d88c64634673-var-lock\") pod \"c280bb7c-e926-46c9-bb2c-d88c64634673\" (UID: \"c280bb7c-e926-46c9-bb2c-d88c64634673\") " Jan 24 00:07:35 crc kubenswrapper[4676]: I0124 00:07:35.766203 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c280bb7c-e926-46c9-bb2c-d88c64634673-kubelet-dir\") pod \"c280bb7c-e926-46c9-bb2c-d88c64634673\" (UID: \"c280bb7c-e926-46c9-bb2c-d88c64634673\") " Jan 24 00:07:35 crc kubenswrapper[4676]: I0124 00:07:35.766236 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c280bb7c-e926-46c9-bb2c-d88c64634673-kube-api-access\") pod \"c280bb7c-e926-46c9-bb2c-d88c64634673\" (UID: \"c280bb7c-e926-46c9-bb2c-d88c64634673\") " Jan 24 00:07:35 crc kubenswrapper[4676]: I0124 00:07:35.766622 4676 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c280bb7c-e926-46c9-bb2c-d88c64634673-var-lock" (OuterVolumeSpecName: "var-lock") pod "c280bb7c-e926-46c9-bb2c-d88c64634673" (UID: "c280bb7c-e926-46c9-bb2c-d88c64634673"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:07:35 crc kubenswrapper[4676]: I0124 00:07:35.766645 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c280bb7c-e926-46c9-bb2c-d88c64634673-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c280bb7c-e926-46c9-bb2c-d88c64634673" (UID: "c280bb7c-e926-46c9-bb2c-d88c64634673"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:07:35 crc kubenswrapper[4676]: I0124 00:07:35.784436 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c280bb7c-e926-46c9-bb2c-d88c64634673-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c280bb7c-e926-46c9-bb2c-d88c64634673" (UID: "c280bb7c-e926-46c9-bb2c-d88c64634673"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:07:35 crc kubenswrapper[4676]: I0124 00:07:35.870140 4676 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c280bb7c-e926-46c9-bb2c-d88c64634673-var-lock\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:35 crc kubenswrapper[4676]: I0124 00:07:35.870173 4676 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c280bb7c-e926-46c9-bb2c-d88c64634673-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:35 crc kubenswrapper[4676]: I0124 00:07:35.870186 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c280bb7c-e926-46c9-bb2c-d88c64634673-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.113151 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.114534 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.115883 4676 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.116519 4676 status_manager.go:851] "Failed to get status for pod" podUID="c280bb7c-e926-46c9-bb2c-d88c64634673" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.117168 4676 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.258287 4676 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.258573 4676 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.258799 4676 status_manager.go:851] "Failed to get status for pod" podUID="c280bb7c-e926-46c9-bb2c-d88c64634673" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.274142 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.274226 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.274253 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.274327 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.274580 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.274668 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.275115 4676 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.275139 4676 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.275197 4676 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.457397 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c280bb7c-e926-46c9-bb2c-d88c64634673","Type":"ContainerDied","Data":"4ecf5bf658471ce1fe90cd7189121058d1fec23a97202699de0ab189c854a988"} Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.457424 4676 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.457452 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ecf5bf658471ce1fe90cd7189121058d1fec23a97202699de0ab189c854a988" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.460619 4676 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142" exitCode=0 Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.460698 4676 scope.go:117] "RemoveContainer" containerID="f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.460744 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.461551 4676 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.461885 4676 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.463609 4676 status_manager.go:851] "Failed to get status for pod" podUID="c280bb7c-e926-46c9-bb2c-d88c64634673" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.464023 4676 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.471153 4676 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.471853 4676 status_manager.go:851] "Failed to get status for pod" podUID="c280bb7c-e926-46c9-bb2c-d88c64634673" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.478410 4676 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.478786 4676 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.479226 4676 status_manager.go:851] "Failed to get status for pod" podUID="c280bb7c-e926-46c9-bb2c-d88c64634673" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.480842 4676 scope.go:117] "RemoveContainer" containerID="7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.495503 4676 scope.go:117] "RemoveContainer" containerID="229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.522120 4676 scope.go:117] "RemoveContainer" containerID="1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.536982 4676 scope.go:117] "RemoveContainer" containerID="85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.550997 4676 scope.go:117] "RemoveContainer" containerID="53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.581871 4676 scope.go:117] "RemoveContainer" containerID="f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a" Jan 24 00:07:36 crc kubenswrapper[4676]: E0124 00:07:36.582569 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\": container with ID starting with f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a not 
found: ID does not exist" containerID="f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.582613 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a"} err="failed to get container status \"f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\": rpc error: code = NotFound desc = could not find container \"f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a\": container with ID starting with f16d2dabd09c8f6638d8a22e94054b9edf5285e43e2c92c32684d973cb01f33a not found: ID does not exist" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.582649 4676 scope.go:117] "RemoveContainer" containerID="7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9" Jan 24 00:07:36 crc kubenswrapper[4676]: E0124 00:07:36.583946 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\": container with ID starting with 7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9 not found: ID does not exist" containerID="7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.583978 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9"} err="failed to get container status \"7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\": rpc error: code = NotFound desc = could not find container \"7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9\": container with ID starting with 7d29a6a014f64831d9a51bfe94c8dee076d996d3ae19a14b236d784d365757c9 not found: ID does not exist" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.584003 
4676 scope.go:117] "RemoveContainer" containerID="229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa" Jan 24 00:07:36 crc kubenswrapper[4676]: E0124 00:07:36.584244 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\": container with ID starting with 229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa not found: ID does not exist" containerID="229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.584277 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa"} err="failed to get container status \"229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\": rpc error: code = NotFound desc = could not find container \"229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa\": container with ID starting with 229ff7cce0b4e89faae3c3c05ca116c9b91bd68ea3975e434f56778bef20f3aa not found: ID does not exist" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.584300 4676 scope.go:117] "RemoveContainer" containerID="1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b" Jan 24 00:07:36 crc kubenswrapper[4676]: E0124 00:07:36.584648 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\": container with ID starting with 1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b not found: ID does not exist" containerID="1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.584675 4676 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b"} err="failed to get container status \"1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\": rpc error: code = NotFound desc = could not find container \"1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b\": container with ID starting with 1f7445ac3f25b14c2fa8ab85a43fb52bfe9a61373e1631b03ac314701897d57b not found: ID does not exist" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.584691 4676 scope.go:117] "RemoveContainer" containerID="85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142" Jan 24 00:07:36 crc kubenswrapper[4676]: E0124 00:07:36.584956 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\": container with ID starting with 85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142 not found: ID does not exist" containerID="85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.584976 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142"} err="failed to get container status \"85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\": rpc error: code = NotFound desc = could not find container \"85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142\": container with ID starting with 85fef114ba2a21932b6ff9a529a74938a6446ac89355fe1ab4b7d447194c4142 not found: ID does not exist" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.585023 4676 scope.go:117] "RemoveContainer" containerID="53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c" Jan 24 00:07:36 crc kubenswrapper[4676]: E0124 00:07:36.585418 4676 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\": container with ID starting with 53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c not found: ID does not exist" containerID="53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c" Jan 24 00:07:36 crc kubenswrapper[4676]: I0124 00:07:36.585452 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c"} err="failed to get container status \"53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\": rpc error: code = NotFound desc = could not find container \"53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c\": container with ID starting with 53647b8b73c8af266a7c6abbd5af86c45dabbc3ccd4564adfcc4d41952518b4c not found: ID does not exist" Jan 24 00:07:38 crc kubenswrapper[4676]: I0124 00:07:38.266462 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 24 00:07:40 crc kubenswrapper[4676]: E0124 00:07:40.423044 4676 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:40 crc kubenswrapper[4676]: E0124 00:07:40.424086 4676 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:40 crc kubenswrapper[4676]: E0124 00:07:40.424353 4676 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:40 crc kubenswrapper[4676]: E0124 00:07:40.424758 4676 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:40 crc kubenswrapper[4676]: E0124 00:07:40.424923 4676 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:40 crc kubenswrapper[4676]: I0124 00:07:40.424947 4676 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 24 00:07:40 crc kubenswrapper[4676]: E0124 00:07:40.425089 4676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="200ms" Jan 24 00:07:40 crc kubenswrapper[4676]: E0124 00:07:40.626425 4676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="400ms" Jan 24 00:07:41 crc kubenswrapper[4676]: E0124 00:07:41.027999 4676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="800ms" Jan 24 00:07:41 crc kubenswrapper[4676]: E0124 
00:07:41.829012 4676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="1.6s" Jan 24 00:07:42 crc kubenswrapper[4676]: E0124 00:07:42.348982 4676 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.27:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" volumeName="registry-storage" Jan 24 00:07:43 crc kubenswrapper[4676]: E0124 00:07:43.430549 4676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="3.2s" Jan 24 00:07:45 crc kubenswrapper[4676]: E0124 00:07:45.297357 4676 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.27:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d8214b6bb6799 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" 
already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 00:07:34.103164825 +0000 UTC m=+238.133135856,LastTimestamp:2026-01-24 00:07:34.103164825 +0000 UTC m=+238.133135856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 24 00:07:46 crc kubenswrapper[4676]: I0124 00:07:46.257658 4676 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:46 crc kubenswrapper[4676]: I0124 00:07:46.258006 4676 status_manager.go:851] "Failed to get status for pod" podUID="c280bb7c-e926-46c9-bb2c-d88c64634673" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:46 crc kubenswrapper[4676]: E0124 00:07:46.632046 4676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="6.4s" Jan 24 00:07:47 crc kubenswrapper[4676]: I0124 00:07:47.255459 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:47 crc kubenswrapper[4676]: I0124 00:07:47.256768 4676 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:47 crc kubenswrapper[4676]: I0124 00:07:47.257166 4676 status_manager.go:851] "Failed to get status for pod" podUID="c280bb7c-e926-46c9-bb2c-d88c64634673" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:47 crc kubenswrapper[4676]: I0124 00:07:47.278049 4676 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="653e6c74-9f8e-4c5f-b101-5b8da2e962ca" Jan 24 00:07:47 crc kubenswrapper[4676]: I0124 00:07:47.278109 4676 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="653e6c74-9f8e-4c5f-b101-5b8da2e962ca" Jan 24 00:07:47 crc kubenswrapper[4676]: E0124 00:07:47.278599 4676 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:47 crc kubenswrapper[4676]: I0124 00:07:47.279232 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:47 crc kubenswrapper[4676]: I0124 00:07:47.530407 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4c6e9ec6352d1187d46c58912b6428fa13f7ab2ce0d3deb4c549df4b57bec338"} Jan 24 00:07:48 crc kubenswrapper[4676]: I0124 00:07:48.539871 4676 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1b050fc2d0d0d17898728982ae6d475c683cf41b177d47ea2a2be452323d9f0b" exitCode=0 Jan 24 00:07:48 crc kubenswrapper[4676]: I0124 00:07:48.540017 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1b050fc2d0d0d17898728982ae6d475c683cf41b177d47ea2a2be452323d9f0b"} Jan 24 00:07:48 crc kubenswrapper[4676]: I0124 00:07:48.540312 4676 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="653e6c74-9f8e-4c5f-b101-5b8da2e962ca" Jan 24 00:07:48 crc kubenswrapper[4676]: I0124 00:07:48.540348 4676 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="653e6c74-9f8e-4c5f-b101-5b8da2e962ca" Jan 24 00:07:48 crc kubenswrapper[4676]: I0124 00:07:48.541683 4676 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:48 crc kubenswrapper[4676]: I0124 00:07:48.542028 4676 status_manager.go:851] "Failed to get status for pod" podUID="c280bb7c-e926-46c9-bb2c-d88c64634673" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:48 crc kubenswrapper[4676]: E0124 00:07:48.542097 4676 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:48 crc kubenswrapper[4676]: I0124 00:07:48.544905 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 24 00:07:48 crc kubenswrapper[4676]: I0124 00:07:48.544999 4676 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d" exitCode=1 Jan 24 00:07:48 crc kubenswrapper[4676]: I0124 00:07:48.545052 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d"} Jan 24 00:07:48 crc kubenswrapper[4676]: I0124 00:07:48.546157 4676 scope.go:117] "RemoveContainer" containerID="11b5a9331c7ed54da1e29daf0add6d4b15551929d37f1216b451e13b7d5ea94d" Jan 24 00:07:48 crc kubenswrapper[4676]: I0124 00:07:48.546673 4676 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 
00:07:48 crc kubenswrapper[4676]: I0124 00:07:48.547243 4676 status_manager.go:851] "Failed to get status for pod" podUID="c280bb7c-e926-46c9-bb2c-d88c64634673" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:48 crc kubenswrapper[4676]: I0124 00:07:48.547728 4676 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 24 00:07:49 crc kubenswrapper[4676]: I0124 00:07:49.551832 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9af73b52d1d7410a18d6167074ecb96f5501caa21121bdc68a8da60db3dcb9cd"} Jan 24 00:07:49 crc kubenswrapper[4676]: I0124 00:07:49.552700 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"60b6dde5f3cba077c5ac3a95a9c47d9a96a1d09924ec7d461c9e18975ec769d0"} Jan 24 00:07:49 crc kubenswrapper[4676]: I0124 00:07:49.552721 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c5fa1473a44a3285dfd2dbd329fc5ad732f136d14f10cb3a0d3211302b2a13d3"} Jan 24 00:07:49 crc kubenswrapper[4676]: I0124 00:07:49.554739 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" 
Jan 24 00:07:49 crc kubenswrapper[4676]: I0124 00:07:49.554813 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d7b9fc46f06e374dfc7e4e9a0230fb3fa43bf3f2fbe5d4102849e87dc273f97c"} Jan 24 00:07:49 crc kubenswrapper[4676]: I0124 00:07:49.632399 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 00:07:49 crc kubenswrapper[4676]: I0124 00:07:49.637041 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 00:07:50 crc kubenswrapper[4676]: I0124 00:07:50.562367 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cd15469e4299972ae731dcd1ab90f9776eb75f14f17e8bdd477b3ea8146a25c6"} Jan 24 00:07:50 crc kubenswrapper[4676]: I0124 00:07:50.562477 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e02a952e3df13febc6f27d226d50c7693b0e26800feaaecd6de3534fa13f4e1f"} Jan 24 00:07:50 crc kubenswrapper[4676]: I0124 00:07:50.562511 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 00:07:50 crc kubenswrapper[4676]: I0124 00:07:50.562650 4676 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="653e6c74-9f8e-4c5f-b101-5b8da2e962ca" Jan 24 00:07:50 crc kubenswrapper[4676]: I0124 00:07:50.562672 4676 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="653e6c74-9f8e-4c5f-b101-5b8da2e962ca" Jan 24 
00:07:52 crc kubenswrapper[4676]: I0124 00:07:52.280069 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:52 crc kubenswrapper[4676]: I0124 00:07:52.280411 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:52 crc kubenswrapper[4676]: I0124 00:07:52.284948 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:55 crc kubenswrapper[4676]: I0124 00:07:55.636159 4676 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:56 crc kubenswrapper[4676]: I0124 00:07:56.268291 4676 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ca258140-5ed5-4682-9321-00fc4c664028" Jan 24 00:07:56 crc kubenswrapper[4676]: I0124 00:07:56.608452 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:07:56 crc kubenswrapper[4676]: I0124 00:07:56.608623 4676 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="653e6c74-9f8e-4c5f-b101-5b8da2e962ca" Jan 24 00:07:56 crc kubenswrapper[4676]: I0124 00:07:56.608653 4676 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="653e6c74-9f8e-4c5f-b101-5b8da2e962ca" Jan 24 00:07:56 crc kubenswrapper[4676]: I0124 00:07:56.614100 4676 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ca258140-5ed5-4682-9321-00fc4c664028" Jan 24 00:07:57 crc kubenswrapper[4676]: I0124 00:07:57.614147 4676 kubelet.go:1909] 
"Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="653e6c74-9f8e-4c5f-b101-5b8da2e962ca" Jan 24 00:07:57 crc kubenswrapper[4676]: I0124 00:07:57.614193 4676 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="653e6c74-9f8e-4c5f-b101-5b8da2e962ca" Jan 24 00:07:57 crc kubenswrapper[4676]: I0124 00:07:57.617327 4676 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ca258140-5ed5-4682-9321-00fc4c664028" Jan 24 00:08:05 crc kubenswrapper[4676]: I0124 00:08:05.480194 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 24 00:08:05 crc kubenswrapper[4676]: I0124 00:08:05.706498 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 24 00:08:05 crc kubenswrapper[4676]: I0124 00:08:05.750858 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 24 00:08:05 crc kubenswrapper[4676]: I0124 00:08:05.915058 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 24 00:08:05 crc kubenswrapper[4676]: I0124 00:08:05.976854 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 00:08:06 crc kubenswrapper[4676]: I0124 00:08:06.131354 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 24 00:08:06 crc kubenswrapper[4676]: I0124 00:08:06.591610 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 24 00:08:06 crc kubenswrapper[4676]: I0124 
00:08:06.837559 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 24 00:08:06 crc kubenswrapper[4676]: I0124 00:08:06.895118 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 24 00:08:06 crc kubenswrapper[4676]: I0124 00:08:06.901593 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 24 00:08:07 crc kubenswrapper[4676]: I0124 00:08:07.024844 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 24 00:08:07 crc kubenswrapper[4676]: I0124 00:08:07.442732 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 24 00:08:07 crc kubenswrapper[4676]: I0124 00:08:07.468990 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 24 00:08:07 crc kubenswrapper[4676]: I0124 00:08:07.555352 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 24 00:08:07 crc kubenswrapper[4676]: I0124 00:08:07.603633 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 24 00:08:07 crc kubenswrapper[4676]: I0124 00:08:07.761491 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 24 00:08:07 crc kubenswrapper[4676]: I0124 00:08:07.806526 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 24 00:08:07 crc kubenswrapper[4676]: I0124 00:08:07.812512 4676 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 24 
00:08:07 crc kubenswrapper[4676]: I0124 00:08:07.863714 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 24 00:08:07 crc kubenswrapper[4676]: I0124 00:08:07.874001 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 24 00:08:07 crc kubenswrapper[4676]: I0124 00:08:07.918710 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 24 00:08:07 crc kubenswrapper[4676]: I0124 00:08:07.995696 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 24 00:08:08 crc kubenswrapper[4676]: I0124 00:08:08.002500 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 24 00:08:08 crc kubenswrapper[4676]: I0124 00:08:08.092976 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 24 00:08:08 crc kubenswrapper[4676]: I0124 00:08:08.115127 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 24 00:08:08 crc kubenswrapper[4676]: I0124 00:08:08.128915 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 24 00:08:08 crc kubenswrapper[4676]: I0124 00:08:08.289845 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 24 00:08:08 crc kubenswrapper[4676]: I0124 00:08:08.400954 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 24 00:08:08 crc kubenswrapper[4676]: I0124 00:08:08.505456 4676 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 24 00:08:08 crc kubenswrapper[4676]: I0124 00:08:08.701655 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 24 00:08:08 crc kubenswrapper[4676]: I0124 00:08:08.730236 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 24 00:08:08 crc kubenswrapper[4676]: I0124 00:08:08.816472 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 24 00:08:08 crc kubenswrapper[4676]: I0124 00:08:08.827234 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 24 00:08:08 crc kubenswrapper[4676]: I0124 00:08:08.862148 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 24 00:08:08 crc kubenswrapper[4676]: I0124 00:08:08.889535 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 24 00:08:08 crc kubenswrapper[4676]: I0124 00:08:08.920011 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 24 00:08:08 crc kubenswrapper[4676]: I0124 00:08:08.995552 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 24 00:08:09 crc kubenswrapper[4676]: I0124 00:08:09.158591 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 24 00:08:09 crc kubenswrapper[4676]: I0124 00:08:09.241832 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 24 00:08:09 crc kubenswrapper[4676]: I0124 
00:08:09.326991 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 24 00:08:09 crc kubenswrapper[4676]: I0124 00:08:09.345937 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 24 00:08:09 crc kubenswrapper[4676]: I0124 00:08:09.384184 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 24 00:08:09 crc kubenswrapper[4676]: I0124 00:08:09.404452 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 24 00:08:09 crc kubenswrapper[4676]: I0124 00:08:09.426790 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 24 00:08:09 crc kubenswrapper[4676]: I0124 00:08:09.491974 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 24 00:08:09 crc kubenswrapper[4676]: I0124 00:08:09.540021 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 24 00:08:09 crc kubenswrapper[4676]: I0124 00:08:09.543009 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 24 00:08:09 crc kubenswrapper[4676]: I0124 00:08:09.651227 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 24 00:08:09 crc kubenswrapper[4676]: I0124 00:08:09.657855 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 24 00:08:09 crc kubenswrapper[4676]: I0124 00:08:09.780498 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 24 00:08:09 
crc kubenswrapper[4676]: I0124 00:08:09.798019 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 24 00:08:09 crc kubenswrapper[4676]: I0124 00:08:09.817121 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 24 00:08:09 crc kubenswrapper[4676]: I0124 00:08:09.878222 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.001122 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.062425 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.087315 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.154592 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.211603 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.337362 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.392972 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.423032 4676 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.452922 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.489665 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.641879 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.642516 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.678657 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.733231 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.733977 4676 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.763513 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.767918 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.814825 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.864366 4676 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.893451 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 24 00:08:10 crc kubenswrapper[4676]: I0124 00:08:10.938084 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.114636 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.231561 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.233206 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.240681 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.283851 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.289566 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.365613 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.401155 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 24 00:08:11 crc 
kubenswrapper[4676]: I0124 00:08:11.415408 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.423253 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.437930 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.463848 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.517527 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.632469 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.819813 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.840273 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.887432 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.919887 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.953952 4676 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 24 00:08:11 crc kubenswrapper[4676]: I0124 00:08:11.955648 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.015104 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.036759 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.068044 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.089414 4676 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.114402 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.131213 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.160346 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.166997 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.195006 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" 
Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.253985 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.308007 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.378565 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.447670 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.449290 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.537225 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.603108 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.608140 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.658170 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.740873 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.829414 4676 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.911894 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.923937 4676 patch_prober.go:28] interesting pod/console-operator-58897d9998-x649d container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.924002 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-x649d" podUID="32de8698-4bd5-4154-92a3-76930504a72d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.924471 4676 patch_prober.go:28] interesting pod/console-operator-58897d9998-x649d container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 00:08:12.924546 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-x649d" podUID="32de8698-4bd5-4154-92a3-76930504a72d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 00:08:12 crc kubenswrapper[4676]: I0124 
00:08:12.981284 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.029680 4676 patch_prober.go:28] interesting pod/console-f9d7485db-g2smk container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.029758 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-g2smk" podUID="5cce043a-2f1b-4f48-967e-c48a00cfe1a6" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.033820 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.086934 4676 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-n6vx5 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.087123 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" podUID="fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 
24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.086903 4676 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-n6vx5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.087570 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" podUID="fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.113574 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.128634 4676 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-ft4kq container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": context deadline exceeded" start-of-body= Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.128739 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-ft4kq" podUID="e6351c23-e315-4c92-a467-380da403d3c4" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": context deadline exceeded" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.179961 4676 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 
00:08:13.182748 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=40.182727886 podStartE2EDuration="40.182727886s" podCreationTimestamp="2026-01-24 00:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:07:55.670434198 +0000 UTC m=+259.700405229" watchObservedRunningTime="2026-01-24 00:08:13.182727886 +0000 UTC m=+277.212698887" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.185643 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.185700 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.194321 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.194900 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.208538 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.233488 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.234599 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.242215 4676 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.337741 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.340214 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.340189025 podStartE2EDuration="18.340189025s" podCreationTimestamp="2026-01-24 00:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:08:13.217423137 +0000 UTC m=+277.247394138" watchObservedRunningTime="2026-01-24 00:08:13.340189025 +0000 UTC m=+277.370160026" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.418116 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.445598 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.561236 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.664903 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.730102 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.745617 4676 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"env-overrides" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.763913 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.881895 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.956317 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 24 00:08:13 crc kubenswrapper[4676]: I0124 00:08:13.978271 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 24 00:08:14 crc kubenswrapper[4676]: I0124 00:08:14.223529 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 24 00:08:14 crc kubenswrapper[4676]: I0124 00:08:14.396971 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 24 00:08:14 crc kubenswrapper[4676]: I0124 00:08:14.543309 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 24 00:08:14 crc kubenswrapper[4676]: I0124 00:08:14.606128 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 24 00:08:14 crc kubenswrapper[4676]: I0124 00:08:14.615182 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 24 00:08:14 crc kubenswrapper[4676]: I0124 00:08:14.659423 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 24 00:08:14 crc kubenswrapper[4676]: I0124 00:08:14.696066 4676 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 24 00:08:14 crc kubenswrapper[4676]: I0124 00:08:14.702371 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 24 00:08:14 crc kubenswrapper[4676]: I0124 00:08:14.771101 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 24 00:08:14 crc kubenswrapper[4676]: I0124 00:08:14.871113 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.056808 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.085440 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.151627 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.173998 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.280625 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.332215 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.360555 4676 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"env-overrides" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.365230 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.399911 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.431502 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.442023 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.500341 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.515648 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.713034 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.718731 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.724720 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.849564 4676 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 24 00:08:15 crc kubenswrapper[4676]: I0124 00:08:15.859038 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.022224 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.031474 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.193752 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.196757 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.258815 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.448582 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.545652 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.588203 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.626903 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.658888 
4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.677587 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.746415 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.807205 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.824206 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.890334 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 24 00:08:16 crc kubenswrapper[4676]: I0124 00:08:16.993601 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.082513 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.106112 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.107014 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.139152 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 24 00:08:17 crc 
kubenswrapper[4676]: I0124 00:08:17.204764 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.234214 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.249358 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.269403 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.272418 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.280011 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.306626 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.308040 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.326097 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.364205 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.386571 4676 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.520932 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.741788 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.777114 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.793502 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.835005 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 24 00:08:17 crc kubenswrapper[4676]: I0124 00:08:17.916402 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:17.947995 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.050843 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.082542 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.108590 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.146603 4676 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.183966 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.185454 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.262239 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.295521 4676 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.295861 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.295880 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://31446979da9b2814d80beb1cd72c15277eaaac69a2a2c9676d049a6a8acf10c5" gracePeriod=5 Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.491147 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.568722 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.651607 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 24 00:08:18 
crc kubenswrapper[4676]: I0124 00:08:18.682943 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.708611 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.851103 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.914638 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.974243 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 24 00:08:18 crc kubenswrapper[4676]: I0124 00:08:18.981936 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.012699 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.092973 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.124096 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.163885 4676 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.197165 4676 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.289569 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.293610 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.398417 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.467555 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.472722 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.493369 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.590874 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.608294 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.666355 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.676583 4676 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.680920 4676 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.728448 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.729068 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.747541 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.941860 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.998007 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 24 00:08:19 crc kubenswrapper[4676]: I0124 00:08:19.998025 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 24 00:08:20 crc kubenswrapper[4676]: I0124 00:08:20.077928 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 24 00:08:20 crc kubenswrapper[4676]: I0124 00:08:20.131679 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 24 00:08:20 crc kubenswrapper[4676]: I0124 00:08:20.168624 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 24 00:08:20 crc kubenswrapper[4676]: I0124 00:08:20.368915 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 24 00:08:20 crc 
kubenswrapper[4676]: I0124 00:08:20.441033 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 24 00:08:20 crc kubenswrapper[4676]: I0124 00:08:20.597066 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 24 00:08:20 crc kubenswrapper[4676]: I0124 00:08:20.934503 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 24 00:08:21 crc kubenswrapper[4676]: I0124 00:08:21.289050 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 24 00:08:21 crc kubenswrapper[4676]: I0124 00:08:21.735730 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.240667 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.365472 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.497367 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.629394 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lxs47"] Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.629674 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lxs47" podUID="592859d8-1f7e-4e35-acc9-635e130ad2d2" containerName="registry-server" containerID="cri-o://c53d6e9162fb162b724098a40f36dec02c2bb75d99128d9311568c19c9c217a4" 
gracePeriod=30 Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.634647 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pnqjd"] Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.634999 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pnqjd" podUID="891b78f7-509c-4e8d-b846-52881396a64d" containerName="registry-server" containerID="cri-o://fba973515d63eaeb97481aaf2d27398145972a0521a5d48402281dfaca486461" gracePeriod=30 Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.645924 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxmrn"] Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.646255 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rxmrn" podUID="0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" containerName="registry-server" containerID="cri-o://eaf3e81af38d0f1a8b5ae4e7e29a2416e79e9d3282c5a5d4ac1c2360695efab0" gracePeriod=30 Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.654064 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rsw66"] Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.654264 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" podUID="7af03157-ee92-4e72-a775-acaeabb73e65" containerName="marketplace-operator" containerID="cri-o://a20606a37ebc8d77d9e63ac6ed5eaa05b5b45e9454c3cc98154b304c79f1827c" gracePeriod=30 Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.665945 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7nsm"] Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.666237 4676 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-l7nsm" podUID="920c325a-f36b-4162-9d37-ea88124be938" containerName="registry-server" containerID="cri-o://cec35737af48fd560021306cb0c7587b6f69b93f483a188d615bbf073ab3c211" gracePeriod=30 Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.689331 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-645jr"] Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.689757 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-645jr" podUID="10798203-b391-4a87-98a7-b41db2bbb0e2" containerName="registry-server" containerID="cri-o://1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449" gracePeriod=30 Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.719742 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwptt"] Jan 24 00:08:22 crc kubenswrapper[4676]: E0124 00:08:22.719956 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c280bb7c-e926-46c9-bb2c-d88c64634673" containerName="installer" Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.719971 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="c280bb7c-e926-46c9-bb2c-d88c64634673" containerName="installer" Jan 24 00:08:22 crc kubenswrapper[4676]: E0124 00:08:22.719986 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.719995 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.720095 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="c280bb7c-e926-46c9-bb2c-d88c64634673" containerName="installer" Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.720110 4676 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.720541 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.732409 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwptt"] Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.740694 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 24 00:08:22 crc kubenswrapper[4676]: E0124 00:08:22.778428 4676 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449 is running failed: container process not found" containerID="1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449" cmd=["grpc_health_probe","-addr=:50051"] Jan 24 00:08:22 crc kubenswrapper[4676]: E0124 00:08:22.779460 4676 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449 is running failed: container process not found" containerID="1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449" cmd=["grpc_health_probe","-addr=:50051"] Jan 24 00:08:22 crc kubenswrapper[4676]: E0124 00:08:22.786794 4676 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449 is running failed: container process not found" 
containerID="1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449" cmd=["grpc_health_probe","-addr=:50051"] Jan 24 00:08:22 crc kubenswrapper[4676]: E0124 00:08:22.786839 4676 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-645jr" podUID="10798203-b391-4a87-98a7-b41db2bbb0e2" containerName="registry-server" Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.794691 4676 generic.go:334] "Generic (PLEG): container finished" podID="592859d8-1f7e-4e35-acc9-635e130ad2d2" containerID="c53d6e9162fb162b724098a40f36dec02c2bb75d99128d9311568c19c9c217a4" exitCode=0 Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.794746 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxs47" event={"ID":"592859d8-1f7e-4e35-acc9-635e130ad2d2","Type":"ContainerDied","Data":"c53d6e9162fb162b724098a40f36dec02c2bb75d99128d9311568c19c9c217a4"} Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.851820 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pnqjd"] Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.922075 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzqgz\" (UniqueName: \"kubernetes.io/projected/8db31db6-2c7c-4688-89c7-328024cd7003-kube-api-access-zzqgz\") pod \"marketplace-operator-79b997595-kwptt\" (UID: \"8db31db6-2c7c-4688-89c7-328024cd7003\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.922170 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8db31db6-2c7c-4688-89c7-328024cd7003-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kwptt\" (UID: \"8db31db6-2c7c-4688-89c7-328024cd7003\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" Jan 24 00:08:22 crc kubenswrapper[4676]: I0124 00:08:22.922260 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8db31db6-2c7c-4688-89c7-328024cd7003-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kwptt\" (UID: \"8db31db6-2c7c-4688-89c7-328024cd7003\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.023191 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzqgz\" (UniqueName: \"kubernetes.io/projected/8db31db6-2c7c-4688-89c7-328024cd7003-kube-api-access-zzqgz\") pod \"marketplace-operator-79b997595-kwptt\" (UID: \"8db31db6-2c7c-4688-89c7-328024cd7003\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.023239 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8db31db6-2c7c-4688-89c7-328024cd7003-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kwptt\" (UID: \"8db31db6-2c7c-4688-89c7-328024cd7003\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.023428 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8db31db6-2c7c-4688-89c7-328024cd7003-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kwptt\" (UID: \"8db31db6-2c7c-4688-89c7-328024cd7003\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.024682 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8db31db6-2c7c-4688-89c7-328024cd7003-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kwptt\" (UID: \"8db31db6-2c7c-4688-89c7-328024cd7003\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.032960 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8db31db6-2c7c-4688-89c7-328024cd7003-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kwptt\" (UID: \"8db31db6-2c7c-4688-89c7-328024cd7003\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.038707 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzqgz\" (UniqueName: \"kubernetes.io/projected/8db31db6-2c7c-4688-89c7-328024cd7003-kube-api-access-zzqgz\") pod \"marketplace-operator-79b997595-kwptt\" (UID: \"8db31db6-2c7c-4688-89c7-328024cd7003\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.089554 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.179473 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.181173 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.185334 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.189555 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.194741 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.225187 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzqlz\" (UniqueName: \"kubernetes.io/projected/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-kube-api-access-hzqlz\") pod \"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd\" (UID: \"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.226295 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592859d8-1f7e-4e35-acc9-635e130ad2d2-catalog-content\") pod \"592859d8-1f7e-4e35-acc9-635e130ad2d2\" (UID: \"592859d8-1f7e-4e35-acc9-635e130ad2d2\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.226425 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/920c325a-f36b-4162-9d37-ea88124be938-catalog-content\") pod \"920c325a-f36b-4162-9d37-ea88124be938\" (UID: \"920c325a-f36b-4162-9d37-ea88124be938\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.226536 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g5kz\" (UniqueName: 
\"kubernetes.io/projected/10798203-b391-4a87-98a7-b41db2bbb0e2-kube-api-access-8g5kz\") pod \"10798203-b391-4a87-98a7-b41db2bbb0e2\" (UID: \"10798203-b391-4a87-98a7-b41db2bbb0e2\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.227907 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-catalog-content\") pod \"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd\" (UID: \"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.228066 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49sw6\" (UniqueName: \"kubernetes.io/projected/891b78f7-509c-4e8d-b846-52881396a64d-kube-api-access-49sw6\") pod \"891b78f7-509c-4e8d-b846-52881396a64d\" (UID: \"891b78f7-509c-4e8d-b846-52881396a64d\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.228211 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4p5l\" (UniqueName: \"kubernetes.io/projected/920c325a-f36b-4162-9d37-ea88124be938-kube-api-access-n4p5l\") pod \"920c325a-f36b-4162-9d37-ea88124be938\" (UID: \"920c325a-f36b-4162-9d37-ea88124be938\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.228417 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-utilities\") pod \"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd\" (UID: \"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.228585 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjhrt\" (UniqueName: \"kubernetes.io/projected/592859d8-1f7e-4e35-acc9-635e130ad2d2-kube-api-access-tjhrt\") pod \"592859d8-1f7e-4e35-acc9-635e130ad2d2\" (UID: \"592859d8-1f7e-4e35-acc9-635e130ad2d2\") " Jan 24 
00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.228754 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-operator-metrics\") pod \"7af03157-ee92-4e72-a775-acaeabb73e65\" (UID: \"7af03157-ee92-4e72-a775-acaeabb73e65\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.229587 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/891b78f7-509c-4e8d-b846-52881396a64d-catalog-content\") pod \"891b78f7-509c-4e8d-b846-52881396a64d\" (UID: \"891b78f7-509c-4e8d-b846-52881396a64d\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.229722 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10798203-b391-4a87-98a7-b41db2bbb0e2-catalog-content\") pod \"10798203-b391-4a87-98a7-b41db2bbb0e2\" (UID: \"10798203-b391-4a87-98a7-b41db2bbb0e2\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.229829 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/920c325a-f36b-4162-9d37-ea88124be938-utilities\") pod \"920c325a-f36b-4162-9d37-ea88124be938\" (UID: \"920c325a-f36b-4162-9d37-ea88124be938\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.231334 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/920c325a-f36b-4162-9d37-ea88124be938-utilities" (OuterVolumeSpecName: "utilities") pod "920c325a-f36b-4162-9d37-ea88124be938" (UID: "920c325a-f36b-4162-9d37-ea88124be938"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.234332 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-utilities" (OuterVolumeSpecName: "utilities") pod "0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" (UID: "0a4d7c63-cff0-4408-9cb6-450f3ebc53dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.239652 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10798203-b391-4a87-98a7-b41db2bbb0e2-kube-api-access-8g5kz" (OuterVolumeSpecName: "kube-api-access-8g5kz") pod "10798203-b391-4a87-98a7-b41db2bbb0e2" (UID: "10798203-b391-4a87-98a7-b41db2bbb0e2"). InnerVolumeSpecName "kube-api-access-8g5kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.245324 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/891b78f7-509c-4e8d-b846-52881396a64d-kube-api-access-49sw6" (OuterVolumeSpecName: "kube-api-access-49sw6") pod "891b78f7-509c-4e8d-b846-52881396a64d" (UID: "891b78f7-509c-4e8d-b846-52881396a64d"). InnerVolumeSpecName "kube-api-access-49sw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.245572 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/920c325a-f36b-4162-9d37-ea88124be938-kube-api-access-n4p5l" (OuterVolumeSpecName: "kube-api-access-n4p5l") pod "920c325a-f36b-4162-9d37-ea88124be938" (UID: "920c325a-f36b-4162-9d37-ea88124be938"). InnerVolumeSpecName "kube-api-access-n4p5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.245868 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7af03157-ee92-4e72-a775-acaeabb73e65" (UID: "7af03157-ee92-4e72-a775-acaeabb73e65"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.250048 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/592859d8-1f7e-4e35-acc9-635e130ad2d2-kube-api-access-tjhrt" (OuterVolumeSpecName: "kube-api-access-tjhrt") pod "592859d8-1f7e-4e35-acc9-635e130ad2d2" (UID: "592859d8-1f7e-4e35-acc9-635e130ad2d2"). InnerVolumeSpecName "kube-api-access-tjhrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.256130 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-kube-api-access-hzqlz" (OuterVolumeSpecName: "kube-api-access-hzqlz") pod "0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" (UID: "0a4d7c63-cff0-4408-9cb6-450f3ebc53dd"). InnerVolumeSpecName "kube-api-access-hzqlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.279105 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/920c325a-f36b-4162-9d37-ea88124be938-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "920c325a-f36b-4162-9d37-ea88124be938" (UID: "920c325a-f36b-4162-9d37-ea88124be938"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.290658 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.296619 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" (UID: "0a4d7c63-cff0-4408-9cb6-450f3ebc53dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.312271 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592859d8-1f7e-4e35-acc9-635e130ad2d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "592859d8-1f7e-4e35-acc9-635e130ad2d2" (UID: "592859d8-1f7e-4e35-acc9-635e130ad2d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.316279 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/891b78f7-509c-4e8d-b846-52881396a64d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "891b78f7-509c-4e8d-b846-52881396a64d" (UID: "891b78f7-509c-4e8d-b846-52881396a64d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.330881 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592859d8-1f7e-4e35-acc9-635e130ad2d2-utilities\") pod \"592859d8-1f7e-4e35-acc9-635e130ad2d2\" (UID: \"592859d8-1f7e-4e35-acc9-635e130ad2d2\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.330932 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm4tl\" (UniqueName: \"kubernetes.io/projected/7af03157-ee92-4e72-a775-acaeabb73e65-kube-api-access-jm4tl\") pod \"7af03157-ee92-4e72-a775-acaeabb73e65\" (UID: \"7af03157-ee92-4e72-a775-acaeabb73e65\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.330953 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10798203-b391-4a87-98a7-b41db2bbb0e2-utilities\") pod \"10798203-b391-4a87-98a7-b41db2bbb0e2\" (UID: \"10798203-b391-4a87-98a7-b41db2bbb0e2\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.330973 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/891b78f7-509c-4e8d-b846-52881396a64d-utilities\") pod \"891b78f7-509c-4e8d-b846-52881396a64d\" (UID: \"891b78f7-509c-4e8d-b846-52881396a64d\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.331017 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-trusted-ca\") pod \"7af03157-ee92-4e72-a775-acaeabb73e65\" (UID: \"7af03157-ee92-4e72-a775-acaeabb73e65\") " Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.331295 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzqlz\" (UniqueName: 
\"kubernetes.io/projected/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-kube-api-access-hzqlz\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.331306 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592859d8-1f7e-4e35-acc9-635e130ad2d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.331315 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/920c325a-f36b-4162-9d37-ea88124be938-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.331324 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g5kz\" (UniqueName: \"kubernetes.io/projected/10798203-b391-4a87-98a7-b41db2bbb0e2-kube-api-access-8g5kz\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.331332 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.331710 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49sw6\" (UniqueName: \"kubernetes.io/projected/891b78f7-509c-4e8d-b846-52881396a64d-kube-api-access-49sw6\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.331721 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4p5l\" (UniqueName: \"kubernetes.io/projected/920c325a-f36b-4162-9d37-ea88124be938-kube-api-access-n4p5l\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.331730 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.331739 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjhrt\" (UniqueName: \"kubernetes.io/projected/592859d8-1f7e-4e35-acc9-635e130ad2d2-kube-api-access-tjhrt\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.331747 4676 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.331756 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/891b78f7-509c-4e8d-b846-52881396a64d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.331765 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/920c325a-f36b-4162-9d37-ea88124be938-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.333034 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592859d8-1f7e-4e35-acc9-635e130ad2d2-utilities" (OuterVolumeSpecName: "utilities") pod "592859d8-1f7e-4e35-acc9-635e130ad2d2" (UID: "592859d8-1f7e-4e35-acc9-635e130ad2d2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.341145 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7af03157-ee92-4e72-a775-acaeabb73e65" (UID: "7af03157-ee92-4e72-a775-acaeabb73e65"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.341350 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10798203-b391-4a87-98a7-b41db2bbb0e2-utilities" (OuterVolumeSpecName: "utilities") pod "10798203-b391-4a87-98a7-b41db2bbb0e2" (UID: "10798203-b391-4a87-98a7-b41db2bbb0e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.341859 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/891b78f7-509c-4e8d-b846-52881396a64d-utilities" (OuterVolumeSpecName: "utilities") pod "891b78f7-509c-4e8d-b846-52881396a64d" (UID: "891b78f7-509c-4e8d-b846-52881396a64d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.344563 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7af03157-ee92-4e72-a775-acaeabb73e65-kube-api-access-jm4tl" (OuterVolumeSpecName: "kube-api-access-jm4tl") pod "7af03157-ee92-4e72-a775-acaeabb73e65" (UID: "7af03157-ee92-4e72-a775-acaeabb73e65"). InnerVolumeSpecName "kube-api-access-jm4tl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.363470 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.412341 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10798203-b391-4a87-98a7-b41db2bbb0e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10798203-b391-4a87-98a7-b41db2bbb0e2" (UID: "10798203-b391-4a87-98a7-b41db2bbb0e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.432183 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10798203-b391-4a87-98a7-b41db2bbb0e2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.432206 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592859d8-1f7e-4e35-acc9-635e130ad2d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.432217 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm4tl\" (UniqueName: \"kubernetes.io/projected/7af03157-ee92-4e72-a775-acaeabb73e65-kube-api-access-jm4tl\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.432227 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10798203-b391-4a87-98a7-b41db2bbb0e2-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.432236 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/891b78f7-509c-4e8d-b846-52881396a64d-utilities\") on node \"crc\" DevicePath 
\"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.432245 4676 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7af03157-ee92-4e72-a775-acaeabb73e65-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.491161 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwptt"] Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.803181 4676 generic.go:334] "Generic (PLEG): container finished" podID="0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" containerID="eaf3e81af38d0f1a8b5ae4e7e29a2416e79e9d3282c5a5d4ac1c2360695efab0" exitCode=0 Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.803237 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxmrn" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.803287 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmrn" event={"ID":"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd","Type":"ContainerDied","Data":"eaf3e81af38d0f1a8b5ae4e7e29a2416e79e9d3282c5a5d4ac1c2360695efab0"} Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.803348 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmrn" event={"ID":"0a4d7c63-cff0-4408-9cb6-450f3ebc53dd","Type":"ContainerDied","Data":"e7b1f0797950f02a9f5633d0b1bb88fc987a1236bb934934be7f6364e391819f"} Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.803387 4676 scope.go:117] "RemoveContainer" containerID="eaf3e81af38d0f1a8b5ae4e7e29a2416e79e9d3282c5a5d4ac1c2360695efab0" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.806332 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" 
event={"ID":"8db31db6-2c7c-4688-89c7-328024cd7003","Type":"ContainerStarted","Data":"f747fa388ee96adbcd130ec185be58f0f23b427ed1052f15f08077aca9cb4bec"} Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.806362 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" event={"ID":"8db31db6-2c7c-4688-89c7-328024cd7003","Type":"ContainerStarted","Data":"4ab3c7b1ae83baac88eadf6edce3d31ceb98ea3b39880093b168a5c27926ed80"} Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.806566 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.809130 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxs47" event={"ID":"592859d8-1f7e-4e35-acc9-635e130ad2d2","Type":"ContainerDied","Data":"13ab7acc7e8b9f7cc9283df9a7fe0c7c12a31d5e17ec678008c5409224450c7c"} Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.809248 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lxs47" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.813474 4676 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kwptt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.58:8080/healthz\": dial tcp 10.217.0.58:8080: connect: connection refused" start-of-body= Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.813527 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" podUID="8db31db6-2c7c-4688-89c7-328024cd7003" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.58:8080/healthz\": dial tcp 10.217.0.58:8080: connect: connection refused" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.819595 4676 generic.go:334] "Generic (PLEG): container finished" podID="920c325a-f36b-4162-9d37-ea88124be938" containerID="cec35737af48fd560021306cb0c7587b6f69b93f483a188d615bbf073ab3c211" exitCode=0 Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.819651 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7nsm" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.819669 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7nsm" event={"ID":"920c325a-f36b-4162-9d37-ea88124be938","Type":"ContainerDied","Data":"cec35737af48fd560021306cb0c7587b6f69b93f483a188d615bbf073ab3c211"} Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.819702 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7nsm" event={"ID":"920c325a-f36b-4162-9d37-ea88124be938","Type":"ContainerDied","Data":"c30184174638423d11a5d0c24917e087282eea0233650b891df20e30923de797"} Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.822511 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.822551 4676 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="31446979da9b2814d80beb1cd72c15277eaaac69a2a2c9676d049a6a8acf10c5" exitCode=137 Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.829251 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" podStartSLOduration=1.829235267 podStartE2EDuration="1.829235267s" podCreationTimestamp="2026-01-24 00:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:08:23.827639375 +0000 UTC m=+287.857610376" watchObservedRunningTime="2026-01-24 00:08:23.829235267 +0000 UTC m=+287.859206268" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.837702 4676 generic.go:334] "Generic (PLEG): container finished" podID="10798203-b391-4a87-98a7-b41db2bbb0e2" 
containerID="1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449" exitCode=0 Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.837826 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-645jr" event={"ID":"10798203-b391-4a87-98a7-b41db2bbb0e2","Type":"ContainerDied","Data":"1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449"} Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.837874 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-645jr" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.837899 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-645jr" event={"ID":"10798203-b391-4a87-98a7-b41db2bbb0e2","Type":"ContainerDied","Data":"bc7d200b5a89367114e48ce959705259b571653e291943569fe69329c0b533b4"} Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.844468 4676 generic.go:334] "Generic (PLEG): container finished" podID="7af03157-ee92-4e72-a775-acaeabb73e65" containerID="a20606a37ebc8d77d9e63ac6ed5eaa05b5b45e9454c3cc98154b304c79f1827c" exitCode=0 Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.844565 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" event={"ID":"7af03157-ee92-4e72-a775-acaeabb73e65","Type":"ContainerDied","Data":"a20606a37ebc8d77d9e63ac6ed5eaa05b5b45e9454c3cc98154b304c79f1827c"} Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.844591 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" event={"ID":"7af03157-ee92-4e72-a775-acaeabb73e65","Type":"ContainerDied","Data":"3e3f107eb805996a69d5ec1a562a4e16c0b2393227f3f9858f21b04b163f17b3"} Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.844675 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rsw66" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.852661 4676 scope.go:117] "RemoveContainer" containerID="932c49b1763ff02b2ff12475c95193c10bded4a7c1630e37d48894f5fd7a938c" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.877616 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxmrn"] Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.887790 4676 generic.go:334] "Generic (PLEG): container finished" podID="891b78f7-509c-4e8d-b846-52881396a64d" containerID="fba973515d63eaeb97481aaf2d27398145972a0521a5d48402281dfaca486461" exitCode=0 Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.887954 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnqjd" event={"ID":"891b78f7-509c-4e8d-b846-52881396a64d","Type":"ContainerDied","Data":"fba973515d63eaeb97481aaf2d27398145972a0521a5d48402281dfaca486461"} Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.888063 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnqjd" event={"ID":"891b78f7-509c-4e8d-b846-52881396a64d","Type":"ContainerDied","Data":"fe1008d35989eb8d8f31d34ea59eda3fc8a16f8472a8718f92a963e88a25a092"} Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.888420 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pnqjd" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.888823 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rxmrn"] Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.898851 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lxs47"] Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.903446 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lxs47"] Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.909811 4676 scope.go:117] "RemoveContainer" containerID="da65e02cce8de032757324d83329948b4cdaba38f23cea059f4dfacc20dbe060" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.910246 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7nsm"] Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.914584 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7nsm"] Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.921394 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.921452 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.929253 4676 scope.go:117] "RemoveContainer" containerID="eaf3e81af38d0f1a8b5ae4e7e29a2416e79e9d3282c5a5d4ac1c2360695efab0" Jan 24 00:08:23 crc kubenswrapper[4676]: E0124 00:08:23.930080 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf3e81af38d0f1a8b5ae4e7e29a2416e79e9d3282c5a5d4ac1c2360695efab0\": container with ID starting with eaf3e81af38d0f1a8b5ae4e7e29a2416e79e9d3282c5a5d4ac1c2360695efab0 not found: ID does not exist" containerID="eaf3e81af38d0f1a8b5ae4e7e29a2416e79e9d3282c5a5d4ac1c2360695efab0" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.930122 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf3e81af38d0f1a8b5ae4e7e29a2416e79e9d3282c5a5d4ac1c2360695efab0"} err="failed to get container status \"eaf3e81af38d0f1a8b5ae4e7e29a2416e79e9d3282c5a5d4ac1c2360695efab0\": rpc error: code = NotFound desc = could not find container \"eaf3e81af38d0f1a8b5ae4e7e29a2416e79e9d3282c5a5d4ac1c2360695efab0\": container with ID starting with eaf3e81af38d0f1a8b5ae4e7e29a2416e79e9d3282c5a5d4ac1c2360695efab0 not found: ID does not exist" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.930148 4676 scope.go:117] "RemoveContainer" containerID="932c49b1763ff02b2ff12475c95193c10bded4a7c1630e37d48894f5fd7a938c" Jan 24 00:08:23 crc kubenswrapper[4676]: E0124 00:08:23.930450 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"932c49b1763ff02b2ff12475c95193c10bded4a7c1630e37d48894f5fd7a938c\": container with ID starting with 932c49b1763ff02b2ff12475c95193c10bded4a7c1630e37d48894f5fd7a938c not found: ID does not exist" containerID="932c49b1763ff02b2ff12475c95193c10bded4a7c1630e37d48894f5fd7a938c" Jan 24 00:08:23 crc 
kubenswrapper[4676]: I0124 00:08:23.930471 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"932c49b1763ff02b2ff12475c95193c10bded4a7c1630e37d48894f5fd7a938c"} err="failed to get container status \"932c49b1763ff02b2ff12475c95193c10bded4a7c1630e37d48894f5fd7a938c\": rpc error: code = NotFound desc = could not find container \"932c49b1763ff02b2ff12475c95193c10bded4a7c1630e37d48894f5fd7a938c\": container with ID starting with 932c49b1763ff02b2ff12475c95193c10bded4a7c1630e37d48894f5fd7a938c not found: ID does not exist" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.930485 4676 scope.go:117] "RemoveContainer" containerID="da65e02cce8de032757324d83329948b4cdaba38f23cea059f4dfacc20dbe060" Jan 24 00:08:23 crc kubenswrapper[4676]: E0124 00:08:23.930773 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da65e02cce8de032757324d83329948b4cdaba38f23cea059f4dfacc20dbe060\": container with ID starting with da65e02cce8de032757324d83329948b4cdaba38f23cea059f4dfacc20dbe060 not found: ID does not exist" containerID="da65e02cce8de032757324d83329948b4cdaba38f23cea059f4dfacc20dbe060" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.930796 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da65e02cce8de032757324d83329948b4cdaba38f23cea059f4dfacc20dbe060"} err="failed to get container status \"da65e02cce8de032757324d83329948b4cdaba38f23cea059f4dfacc20dbe060\": rpc error: code = NotFound desc = could not find container \"da65e02cce8de032757324d83329948b4cdaba38f23cea059f4dfacc20dbe060\": container with ID starting with da65e02cce8de032757324d83329948b4cdaba38f23cea059f4dfacc20dbe060 not found: ID does not exist" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.930919 4676 scope.go:117] "RemoveContainer" containerID="c53d6e9162fb162b724098a40f36dec02c2bb75d99128d9311568c19c9c217a4" Jan 24 
00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.931788 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-645jr"] Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.939534 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-645jr"] Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.947930 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pnqjd"] Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.952430 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pnqjd"] Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.954742 4676 scope.go:117] "RemoveContainer" containerID="d84cf120443faf814099d3b57961bbb33e517c1b14f12a7cc4e82668a41edcc9" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.958299 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rsw66"] Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.961550 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rsw66"] Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.982619 4676 scope.go:117] "RemoveContainer" containerID="5f55960a6c743513af98906e24b7fac1e853e9cbe06b95180d6ac6353a01cca4" Jan 24 00:08:23 crc kubenswrapper[4676]: I0124 00:08:23.995866 4676 scope.go:117] "RemoveContainer" containerID="cec35737af48fd560021306cb0c7587b6f69b93f483a188d615bbf073ab3c211" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.010415 4676 scope.go:117] "RemoveContainer" containerID="c367a0daec440166091e52dccf8eb465443af958a90797e10fac9e024d33ce5b" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.026151 4676 scope.go:117] "RemoveContainer" containerID="1bdebd4e7e3f1f39e0a22ca3c50b2e8f9a237356bdbbca8ae01f28c88f7a72d3" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 
00:08:24.037657 4676 scope.go:117] "RemoveContainer" containerID="cec35737af48fd560021306cb0c7587b6f69b93f483a188d615bbf073ab3c211" Jan 24 00:08:24 crc kubenswrapper[4676]: E0124 00:08:24.038402 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cec35737af48fd560021306cb0c7587b6f69b93f483a188d615bbf073ab3c211\": container with ID starting with cec35737af48fd560021306cb0c7587b6f69b93f483a188d615bbf073ab3c211 not found: ID does not exist" containerID="cec35737af48fd560021306cb0c7587b6f69b93f483a188d615bbf073ab3c211" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.038444 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec35737af48fd560021306cb0c7587b6f69b93f483a188d615bbf073ab3c211"} err="failed to get container status \"cec35737af48fd560021306cb0c7587b6f69b93f483a188d615bbf073ab3c211\": rpc error: code = NotFound desc = could not find container \"cec35737af48fd560021306cb0c7587b6f69b93f483a188d615bbf073ab3c211\": container with ID starting with cec35737af48fd560021306cb0c7587b6f69b93f483a188d615bbf073ab3c211 not found: ID does not exist" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.038479 4676 scope.go:117] "RemoveContainer" containerID="c367a0daec440166091e52dccf8eb465443af958a90797e10fac9e024d33ce5b" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.038894 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.038943 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: 
"f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.038970 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.039016 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.039050 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.039102 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.039271 4676 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.039050 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod 
"f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.039067 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.039317 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:08:24 crc kubenswrapper[4676]: E0124 00:08:24.039386 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c367a0daec440166091e52dccf8eb465443af958a90797e10fac9e024d33ce5b\": container with ID starting with c367a0daec440166091e52dccf8eb465443af958a90797e10fac9e024d33ce5b not found: ID does not exist" containerID="c367a0daec440166091e52dccf8eb465443af958a90797e10fac9e024d33ce5b" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.039407 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c367a0daec440166091e52dccf8eb465443af958a90797e10fac9e024d33ce5b"} err="failed to get container status \"c367a0daec440166091e52dccf8eb465443af958a90797e10fac9e024d33ce5b\": rpc error: code = NotFound desc = could not find container \"c367a0daec440166091e52dccf8eb465443af958a90797e10fac9e024d33ce5b\": container with ID starting with 
c367a0daec440166091e52dccf8eb465443af958a90797e10fac9e024d33ce5b not found: ID does not exist" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.039427 4676 scope.go:117] "RemoveContainer" containerID="1bdebd4e7e3f1f39e0a22ca3c50b2e8f9a237356bdbbca8ae01f28c88f7a72d3" Jan 24 00:08:24 crc kubenswrapper[4676]: E0124 00:08:24.039722 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bdebd4e7e3f1f39e0a22ca3c50b2e8f9a237356bdbbca8ae01f28c88f7a72d3\": container with ID starting with 1bdebd4e7e3f1f39e0a22ca3c50b2e8f9a237356bdbbca8ae01f28c88f7a72d3 not found: ID does not exist" containerID="1bdebd4e7e3f1f39e0a22ca3c50b2e8f9a237356bdbbca8ae01f28c88f7a72d3" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.039745 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bdebd4e7e3f1f39e0a22ca3c50b2e8f9a237356bdbbca8ae01f28c88f7a72d3"} err="failed to get container status \"1bdebd4e7e3f1f39e0a22ca3c50b2e8f9a237356bdbbca8ae01f28c88f7a72d3\": rpc error: code = NotFound desc = could not find container \"1bdebd4e7e3f1f39e0a22ca3c50b2e8f9a237356bdbbca8ae01f28c88f7a72d3\": container with ID starting with 1bdebd4e7e3f1f39e0a22ca3c50b2e8f9a237356bdbbca8ae01f28c88f7a72d3 not found: ID does not exist" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.039758 4676 scope.go:117] "RemoveContainer" containerID="1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.044675 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.051193 4676 scope.go:117] "RemoveContainer" containerID="664b5bc200aa1b61de8d525097a5da8a9ecda54e783284807e9107a8e1a20233" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.063291 4676 scope.go:117] "RemoveContainer" containerID="eb96ef854f131482318ba1e9541dfd7cdefbb362e6e310e27326cdd1db4c3ea0" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.076162 4676 scope.go:117] "RemoveContainer" containerID="1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449" Jan 24 00:08:24 crc kubenswrapper[4676]: E0124 00:08:24.076707 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449\": container with ID starting with 1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449 not found: ID does not exist" containerID="1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.076869 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449"} err="failed to get container status \"1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449\": rpc error: code = NotFound desc = could not find container \"1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449\": container with ID starting with 1884c81b387764c68fe7f812469ffb55cdc424867d5432e35c47580ec4c93449 not found: ID does not exist" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.077041 4676 scope.go:117] "RemoveContainer" containerID="664b5bc200aa1b61de8d525097a5da8a9ecda54e783284807e9107a8e1a20233" Jan 24 00:08:24 crc kubenswrapper[4676]: E0124 00:08:24.077593 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"664b5bc200aa1b61de8d525097a5da8a9ecda54e783284807e9107a8e1a20233\": container with ID starting with 664b5bc200aa1b61de8d525097a5da8a9ecda54e783284807e9107a8e1a20233 not found: ID does not exist" containerID="664b5bc200aa1b61de8d525097a5da8a9ecda54e783284807e9107a8e1a20233" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.077619 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"664b5bc200aa1b61de8d525097a5da8a9ecda54e783284807e9107a8e1a20233"} err="failed to get container status \"664b5bc200aa1b61de8d525097a5da8a9ecda54e783284807e9107a8e1a20233\": rpc error: code = NotFound desc = could not find container \"664b5bc200aa1b61de8d525097a5da8a9ecda54e783284807e9107a8e1a20233\": container with ID starting with 664b5bc200aa1b61de8d525097a5da8a9ecda54e783284807e9107a8e1a20233 not found: ID does not exist" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.077634 4676 scope.go:117] "RemoveContainer" containerID="eb96ef854f131482318ba1e9541dfd7cdefbb362e6e310e27326cdd1db4c3ea0" Jan 24 00:08:24 crc kubenswrapper[4676]: E0124 00:08:24.077986 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb96ef854f131482318ba1e9541dfd7cdefbb362e6e310e27326cdd1db4c3ea0\": container with ID starting with eb96ef854f131482318ba1e9541dfd7cdefbb362e6e310e27326cdd1db4c3ea0 not found: ID does not exist" containerID="eb96ef854f131482318ba1e9541dfd7cdefbb362e6e310e27326cdd1db4c3ea0" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.078128 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb96ef854f131482318ba1e9541dfd7cdefbb362e6e310e27326cdd1db4c3ea0"} err="failed to get container status \"eb96ef854f131482318ba1e9541dfd7cdefbb362e6e310e27326cdd1db4c3ea0\": rpc error: code = NotFound desc = could not find container \"eb96ef854f131482318ba1e9541dfd7cdefbb362e6e310e27326cdd1db4c3ea0\": 
container with ID starting with eb96ef854f131482318ba1e9541dfd7cdefbb362e6e310e27326cdd1db4c3ea0 not found: ID does not exist" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.078257 4676 scope.go:117] "RemoveContainer" containerID="a20606a37ebc8d77d9e63ac6ed5eaa05b5b45e9454c3cc98154b304c79f1827c" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.093809 4676 scope.go:117] "RemoveContainer" containerID="a20606a37ebc8d77d9e63ac6ed5eaa05b5b45e9454c3cc98154b304c79f1827c" Jan 24 00:08:24 crc kubenswrapper[4676]: E0124 00:08:24.094228 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a20606a37ebc8d77d9e63ac6ed5eaa05b5b45e9454c3cc98154b304c79f1827c\": container with ID starting with a20606a37ebc8d77d9e63ac6ed5eaa05b5b45e9454c3cc98154b304c79f1827c not found: ID does not exist" containerID="a20606a37ebc8d77d9e63ac6ed5eaa05b5b45e9454c3cc98154b304c79f1827c" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.094256 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a20606a37ebc8d77d9e63ac6ed5eaa05b5b45e9454c3cc98154b304c79f1827c"} err="failed to get container status \"a20606a37ebc8d77d9e63ac6ed5eaa05b5b45e9454c3cc98154b304c79f1827c\": rpc error: code = NotFound desc = could not find container \"a20606a37ebc8d77d9e63ac6ed5eaa05b5b45e9454c3cc98154b304c79f1827c\": container with ID starting with a20606a37ebc8d77d9e63ac6ed5eaa05b5b45e9454c3cc98154b304c79f1827c not found: ID does not exist" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.094282 4676 scope.go:117] "RemoveContainer" containerID="fba973515d63eaeb97481aaf2d27398145972a0521a5d48402281dfaca486461" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.106696 4676 scope.go:117] "RemoveContainer" containerID="ec0016b4a3649cb82109f7041ce0d5dd5501d9ceb2f93c0c5e1205592a82d903" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.118228 4676 scope.go:117] 
"RemoveContainer" containerID="5f5e9794469ec06bd85502c50865410e6c1b4c331eb790f8073e4c59c7182bb0" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.132115 4676 scope.go:117] "RemoveContainer" containerID="fba973515d63eaeb97481aaf2d27398145972a0521a5d48402281dfaca486461" Jan 24 00:08:24 crc kubenswrapper[4676]: E0124 00:08:24.132552 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fba973515d63eaeb97481aaf2d27398145972a0521a5d48402281dfaca486461\": container with ID starting with fba973515d63eaeb97481aaf2d27398145972a0521a5d48402281dfaca486461 not found: ID does not exist" containerID="fba973515d63eaeb97481aaf2d27398145972a0521a5d48402281dfaca486461" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.132650 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fba973515d63eaeb97481aaf2d27398145972a0521a5d48402281dfaca486461"} err="failed to get container status \"fba973515d63eaeb97481aaf2d27398145972a0521a5d48402281dfaca486461\": rpc error: code = NotFound desc = could not find container \"fba973515d63eaeb97481aaf2d27398145972a0521a5d48402281dfaca486461\": container with ID starting with fba973515d63eaeb97481aaf2d27398145972a0521a5d48402281dfaca486461 not found: ID does not exist" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.132731 4676 scope.go:117] "RemoveContainer" containerID="ec0016b4a3649cb82109f7041ce0d5dd5501d9ceb2f93c0c5e1205592a82d903" Jan 24 00:08:24 crc kubenswrapper[4676]: E0124 00:08:24.133025 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec0016b4a3649cb82109f7041ce0d5dd5501d9ceb2f93c0c5e1205592a82d903\": container with ID starting with ec0016b4a3649cb82109f7041ce0d5dd5501d9ceb2f93c0c5e1205592a82d903 not found: ID does not exist" containerID="ec0016b4a3649cb82109f7041ce0d5dd5501d9ceb2f93c0c5e1205592a82d903" Jan 24 00:08:24 crc 
kubenswrapper[4676]: I0124 00:08:24.133050 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec0016b4a3649cb82109f7041ce0d5dd5501d9ceb2f93c0c5e1205592a82d903"} err="failed to get container status \"ec0016b4a3649cb82109f7041ce0d5dd5501d9ceb2f93c0c5e1205592a82d903\": rpc error: code = NotFound desc = could not find container \"ec0016b4a3649cb82109f7041ce0d5dd5501d9ceb2f93c0c5e1205592a82d903\": container with ID starting with ec0016b4a3649cb82109f7041ce0d5dd5501d9ceb2f93c0c5e1205592a82d903 not found: ID does not exist" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.133069 4676 scope.go:117] "RemoveContainer" containerID="5f5e9794469ec06bd85502c50865410e6c1b4c331eb790f8073e4c59c7182bb0" Jan 24 00:08:24 crc kubenswrapper[4676]: E0124 00:08:24.133315 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f5e9794469ec06bd85502c50865410e6c1b4c331eb790f8073e4c59c7182bb0\": container with ID starting with 5f5e9794469ec06bd85502c50865410e6c1b4c331eb790f8073e4c59c7182bb0 not found: ID does not exist" containerID="5f5e9794469ec06bd85502c50865410e6c1b4c331eb790f8073e4c59c7182bb0" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.133334 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5e9794469ec06bd85502c50865410e6c1b4c331eb790f8073e4c59c7182bb0"} err="failed to get container status \"5f5e9794469ec06bd85502c50865410e6c1b4c331eb790f8073e4c59c7182bb0\": rpc error: code = NotFound desc = could not find container \"5f5e9794469ec06bd85502c50865410e6c1b4c331eb790f8073e4c59c7182bb0\": container with ID starting with 5f5e9794469ec06bd85502c50865410e6c1b4c331eb790f8073e4c59c7182bb0 not found: ID does not exist" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.140104 4676 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.140128 4676 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.140139 4676 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.140148 4676 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.261878 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" path="/var/lib/kubelet/pods/0a4d7c63-cff0-4408-9cb6-450f3ebc53dd/volumes" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.262669 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10798203-b391-4a87-98a7-b41db2bbb0e2" path="/var/lib/kubelet/pods/10798203-b391-4a87-98a7-b41db2bbb0e2/volumes" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.263450 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="592859d8-1f7e-4e35-acc9-635e130ad2d2" path="/var/lib/kubelet/pods/592859d8-1f7e-4e35-acc9-635e130ad2d2/volumes" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.264846 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7af03157-ee92-4e72-a775-acaeabb73e65" path="/var/lib/kubelet/pods/7af03157-ee92-4e72-a775-acaeabb73e65/volumes" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.265540 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="891b78f7-509c-4e8d-b846-52881396a64d" path="/var/lib/kubelet/pods/891b78f7-509c-4e8d-b846-52881396a64d/volumes" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.267118 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="920c325a-f36b-4162-9d37-ea88124be938" path="/var/lib/kubelet/pods/920c325a-f36b-4162-9d37-ea88124be938/volumes" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.267790 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.268090 4676 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.279027 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.279250 4676 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0c058973-4f5f-43eb-a86b-a70d5b899ad6" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.281786 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.281804 4676 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0c058973-4f5f-43eb-a86b-a70d5b899ad6" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.906477 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 
00:08:24.906590 4676 scope.go:117] "RemoveContainer" containerID="31446979da9b2814d80beb1cd72c15277eaaac69a2a2c9676d049a6a8acf10c5" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.906621 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 00:08:24 crc kubenswrapper[4676]: I0124 00:08:24.915870 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kwptt" Jan 24 00:08:36 crc kubenswrapper[4676]: I0124 00:08:36.124283 4676 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 24 00:08:47 crc kubenswrapper[4676]: I0124 00:08:47.355490 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qzmmz"] Jan 24 00:08:47 crc kubenswrapper[4676]: I0124 00:08:47.357925 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" podUID="b37a2847-a94a-4a0c-b092-1ed7155a2d35" containerName="controller-manager" containerID="cri-o://75da5a7bac808efea95403cb0f28f0e32a88fac01cca8335b0481d14b39d80ed" gracePeriod=30 Jan 24 00:08:47 crc kubenswrapper[4676]: I0124 00:08:47.454244 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5"] Jan 24 00:08:47 crc kubenswrapper[4676]: I0124 00:08:47.454903 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" podUID="fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93" containerName="route-controller-manager" containerID="cri-o://f4280b5e7bc19b43a820be872f8d3506b0ceb5bfbe0f82d0caa7fb857177ccaa" gracePeriod=30 Jan 24 00:08:48 crc kubenswrapper[4676]: I0124 00:08:48.908385 4676 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.011618 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.044887 4676 generic.go:334] "Generic (PLEG): container finished" podID="fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93" containerID="f4280b5e7bc19b43a820be872f8d3506b0ceb5bfbe0f82d0caa7fb857177ccaa" exitCode=0 Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.044941 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" event={"ID":"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93","Type":"ContainerDied","Data":"f4280b5e7bc19b43a820be872f8d3506b0ceb5bfbe0f82d0caa7fb857177ccaa"} Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.045001 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" event={"ID":"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93","Type":"ContainerDied","Data":"caafeb67c7aefa20810226a8341333cb55d2bc2743763b95b6989d1d99cdf707"} Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.045027 4676 scope.go:117] "RemoveContainer" containerID="f4280b5e7bc19b43a820be872f8d3506b0ceb5bfbe0f82d0caa7fb857177ccaa" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.045030 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.046758 4676 generic.go:334] "Generic (PLEG): container finished" podID="b37a2847-a94a-4a0c-b092-1ed7155a2d35" containerID="75da5a7bac808efea95403cb0f28f0e32a88fac01cca8335b0481d14b39d80ed" exitCode=0 Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.046801 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" event={"ID":"b37a2847-a94a-4a0c-b092-1ed7155a2d35","Type":"ContainerDied","Data":"75da5a7bac808efea95403cb0f28f0e32a88fac01cca8335b0481d14b39d80ed"} Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.046818 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" event={"ID":"b37a2847-a94a-4a0c-b092-1ed7155a2d35","Type":"ContainerDied","Data":"63dfdd35be2732ab824c6cbb0405da330ff5fefbf093c8f9f68db2d40a451da2"} Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.046833 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qzmmz" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.047583 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-config\") pod \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.047625 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-proxy-ca-bundles\") pod \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.047683 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv9mz\" (UniqueName: \"kubernetes.io/projected/b37a2847-a94a-4a0c-b092-1ed7155a2d35-kube-api-access-nv9mz\") pod \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.047711 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b37a2847-a94a-4a0c-b092-1ed7155a2d35-serving-cert\") pod \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.047766 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-client-ca\") pod \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\" (UID: \"b37a2847-a94a-4a0c-b092-1ed7155a2d35\") " Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.048619 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-client-ca" (OuterVolumeSpecName: "client-ca") pod "b37a2847-a94a-4a0c-b092-1ed7155a2d35" (UID: "b37a2847-a94a-4a0c-b092-1ed7155a2d35"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.051802 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b37a2847-a94a-4a0c-b092-1ed7155a2d35" (UID: "b37a2847-a94a-4a0c-b092-1ed7155a2d35"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.054036 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-config" (OuterVolumeSpecName: "config") pod "b37a2847-a94a-4a0c-b092-1ed7155a2d35" (UID: "b37a2847-a94a-4a0c-b092-1ed7155a2d35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.054103 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37a2847-a94a-4a0c-b092-1ed7155a2d35-kube-api-access-nv9mz" (OuterVolumeSpecName: "kube-api-access-nv9mz") pod "b37a2847-a94a-4a0c-b092-1ed7155a2d35" (UID: "b37a2847-a94a-4a0c-b092-1ed7155a2d35"). InnerVolumeSpecName "kube-api-access-nv9mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.054734 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37a2847-a94a-4a0c-b092-1ed7155a2d35-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b37a2847-a94a-4a0c-b092-1ed7155a2d35" (UID: "b37a2847-a94a-4a0c-b092-1ed7155a2d35"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.064447 4676 scope.go:117] "RemoveContainer" containerID="f4280b5e7bc19b43a820be872f8d3506b0ceb5bfbe0f82d0caa7fb857177ccaa" Jan 24 00:08:49 crc kubenswrapper[4676]: E0124 00:08:49.064841 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4280b5e7bc19b43a820be872f8d3506b0ceb5bfbe0f82d0caa7fb857177ccaa\": container with ID starting with f4280b5e7bc19b43a820be872f8d3506b0ceb5bfbe0f82d0caa7fb857177ccaa not found: ID does not exist" containerID="f4280b5e7bc19b43a820be872f8d3506b0ceb5bfbe0f82d0caa7fb857177ccaa" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.064878 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4280b5e7bc19b43a820be872f8d3506b0ceb5bfbe0f82d0caa7fb857177ccaa"} err="failed to get container status \"f4280b5e7bc19b43a820be872f8d3506b0ceb5bfbe0f82d0caa7fb857177ccaa\": rpc error: code = NotFound desc = could not find container \"f4280b5e7bc19b43a820be872f8d3506b0ceb5bfbe0f82d0caa7fb857177ccaa\": container with ID starting with f4280b5e7bc19b43a820be872f8d3506b0ceb5bfbe0f82d0caa7fb857177ccaa not found: ID does not exist" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.064908 4676 scope.go:117] "RemoveContainer" containerID="75da5a7bac808efea95403cb0f28f0e32a88fac01cca8335b0481d14b39d80ed" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.085710 4676 scope.go:117] "RemoveContainer" containerID="75da5a7bac808efea95403cb0f28f0e32a88fac01cca8335b0481d14b39d80ed" Jan 24 00:08:49 crc kubenswrapper[4676]: E0124 00:08:49.086331 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75da5a7bac808efea95403cb0f28f0e32a88fac01cca8335b0481d14b39d80ed\": container with ID starting with 
75da5a7bac808efea95403cb0f28f0e32a88fac01cca8335b0481d14b39d80ed not found: ID does not exist" containerID="75da5a7bac808efea95403cb0f28f0e32a88fac01cca8335b0481d14b39d80ed" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.086366 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75da5a7bac808efea95403cb0f28f0e32a88fac01cca8335b0481d14b39d80ed"} err="failed to get container status \"75da5a7bac808efea95403cb0f28f0e32a88fac01cca8335b0481d14b39d80ed\": rpc error: code = NotFound desc = could not find container \"75da5a7bac808efea95403cb0f28f0e32a88fac01cca8335b0481d14b39d80ed\": container with ID starting with 75da5a7bac808efea95403cb0f28f0e32a88fac01cca8335b0481d14b39d80ed not found: ID does not exist" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.148497 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-client-ca\") pod \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.148568 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-config\") pod \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.148596 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4cw9\" (UniqueName: \"kubernetes.io/projected/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-kube-api-access-h4cw9\") pod \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.148626 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-serving-cert\") pod \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\" (UID: \"fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93\") " Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.148828 4676 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.148840 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.148850 4676 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b37a2847-a94a-4a0c-b092-1ed7155a2d35-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.148861 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv9mz\" (UniqueName: \"kubernetes.io/projected/b37a2847-a94a-4a0c-b092-1ed7155a2d35-kube-api-access-nv9mz\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.148869 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b37a2847-a94a-4a0c-b092-1ed7155a2d35-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.149597 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-config" (OuterVolumeSpecName: "config") pod "fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93" (UID: "fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.149594 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-client-ca" (OuterVolumeSpecName: "client-ca") pod "fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93" (UID: "fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.152678 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93" (UID: "fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.153667 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-kube-api-access-h4cw9" (OuterVolumeSpecName: "kube-api-access-h4cw9") pod "fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93" (UID: "fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93"). InnerVolumeSpecName "kube-api-access-h4cw9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.250326 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.250357 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4cw9\" (UniqueName: \"kubernetes.io/projected/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-kube-api-access-h4cw9\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.250398 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.250407 4676 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.393793 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qzmmz"] Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.402677 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qzmmz"] Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.405987 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5"] Jan 24 00:08:49 crc kubenswrapper[4676]: I0124 00:08:49.408739 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n6vx5"] Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.266435 4676 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="b37a2847-a94a-4a0c-b092-1ed7155a2d35" path="/var/lib/kubelet/pods/b37a2847-a94a-4a0c-b092-1ed7155a2d35/volumes" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.268087 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93" path="/var/lib/kubelet/pods/fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93/volumes" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.373615 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9qdxp"] Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.373836 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891b78f7-509c-4e8d-b846-52881396a64d" containerName="registry-server" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.373850 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="891b78f7-509c-4e8d-b846-52881396a64d" containerName="registry-server" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.373866 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592859d8-1f7e-4e35-acc9-635e130ad2d2" containerName="extract-utilities" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.373875 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="592859d8-1f7e-4e35-acc9-635e130ad2d2" containerName="extract-utilities" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.373887 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592859d8-1f7e-4e35-acc9-635e130ad2d2" containerName="registry-server" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.373896 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="592859d8-1f7e-4e35-acc9-635e130ad2d2" containerName="registry-server" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.373906 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891b78f7-509c-4e8d-b846-52881396a64d" containerName="extract-utilities" Jan 24 00:08:50 crc 
kubenswrapper[4676]: I0124 00:08:50.373915 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="891b78f7-509c-4e8d-b846-52881396a64d" containerName="extract-utilities" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.373924 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10798203-b391-4a87-98a7-b41db2bbb0e2" containerName="extract-utilities" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.373931 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="10798203-b391-4a87-98a7-b41db2bbb0e2" containerName="extract-utilities" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.373943 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920c325a-f36b-4162-9d37-ea88124be938" containerName="registry-server" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.373952 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="920c325a-f36b-4162-9d37-ea88124be938" containerName="registry-server" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.373962 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10798203-b391-4a87-98a7-b41db2bbb0e2" containerName="registry-server" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.373971 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="10798203-b391-4a87-98a7-b41db2bbb0e2" containerName="registry-server" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.373983 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93" containerName="route-controller-manager" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.373993 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93" containerName="route-controller-manager" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.374007 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592859d8-1f7e-4e35-acc9-635e130ad2d2" containerName="extract-content" Jan 24 
00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374014 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="592859d8-1f7e-4e35-acc9-635e130ad2d2" containerName="extract-content" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.374026 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920c325a-f36b-4162-9d37-ea88124be938" containerName="extract-content" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374033 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="920c325a-f36b-4162-9d37-ea88124be938" containerName="extract-content" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.374047 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af03157-ee92-4e72-a775-acaeabb73e65" containerName="marketplace-operator" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374055 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af03157-ee92-4e72-a775-acaeabb73e65" containerName="marketplace-operator" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.374064 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920c325a-f36b-4162-9d37-ea88124be938" containerName="extract-utilities" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374072 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="920c325a-f36b-4162-9d37-ea88124be938" containerName="extract-utilities" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.374083 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37a2847-a94a-4a0c-b092-1ed7155a2d35" containerName="controller-manager" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374091 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37a2847-a94a-4a0c-b092-1ed7155a2d35" containerName="controller-manager" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.374102 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" containerName="extract-content" 
Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374109 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" containerName="extract-content" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.374120 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891b78f7-509c-4e8d-b846-52881396a64d" containerName="extract-content" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374127 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="891b78f7-509c-4e8d-b846-52881396a64d" containerName="extract-content" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.374140 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" containerName="registry-server" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374148 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" containerName="registry-server" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.374157 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10798203-b391-4a87-98a7-b41db2bbb0e2" containerName="extract-content" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374165 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="10798203-b391-4a87-98a7-b41db2bbb0e2" containerName="extract-content" Jan 24 00:08:50 crc kubenswrapper[4676]: E0124 00:08:50.374173 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" containerName="extract-utilities" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374181 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" containerName="extract-utilities" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374278 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a4d7c63-cff0-4408-9cb6-450f3ebc53dd" containerName="registry-server" Jan 24 
00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374290 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37a2847-a94a-4a0c-b092-1ed7155a2d35" containerName="controller-manager" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374303 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="920c325a-f36b-4162-9d37-ea88124be938" containerName="registry-server" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374316 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af03157-ee92-4e72-a775-acaeabb73e65" containerName="marketplace-operator" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374325 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="891b78f7-509c-4e8d-b846-52881396a64d" containerName="registry-server" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374339 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="10798203-b391-4a87-98a7-b41db2bbb0e2" containerName="registry-server" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374350 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4a6a13-8de6-4b4b-8bd7-3e2755e7cf93" containerName="route-controller-manager" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374360 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="592859d8-1f7e-4e35-acc9-635e130ad2d2" containerName="registry-server" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.374763 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.398298 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9qdxp"] Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.473325 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/698d0139-4397-4dd2-bdc1-580717c2f177-bound-sa-token\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.473666 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/698d0139-4397-4dd2-bdc1-580717c2f177-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.473797 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvllg\" (UniqueName: \"kubernetes.io/projected/698d0139-4397-4dd2-bdc1-580717c2f177-kube-api-access-pvllg\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.473904 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/698d0139-4397-4dd2-bdc1-580717c2f177-registry-certificates\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.474006 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/698d0139-4397-4dd2-bdc1-580717c2f177-trusted-ca\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.474170 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.474321 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/698d0139-4397-4dd2-bdc1-580717c2f177-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.474496 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/698d0139-4397-4dd2-bdc1-580717c2f177-registry-tls\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.500306 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.575462 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/698d0139-4397-4dd2-bdc1-580717c2f177-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.575506 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/698d0139-4397-4dd2-bdc1-580717c2f177-registry-tls\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.575532 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/698d0139-4397-4dd2-bdc1-580717c2f177-bound-sa-token\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.575566 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/698d0139-4397-4dd2-bdc1-580717c2f177-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.575590 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvllg\" (UniqueName: \"kubernetes.io/projected/698d0139-4397-4dd2-bdc1-580717c2f177-kube-api-access-pvllg\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.575612 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/698d0139-4397-4dd2-bdc1-580717c2f177-registry-certificates\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.575633 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/698d0139-4397-4dd2-bdc1-580717c2f177-trusted-ca\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.576511 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/698d0139-4397-4dd2-bdc1-580717c2f177-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.576904 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/698d0139-4397-4dd2-bdc1-580717c2f177-trusted-ca\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 
00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.577885 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/698d0139-4397-4dd2-bdc1-580717c2f177-registry-certificates\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.579747 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/698d0139-4397-4dd2-bdc1-580717c2f177-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.581626 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/698d0139-4397-4dd2-bdc1-580717c2f177-registry-tls\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.594154 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/698d0139-4397-4dd2-bdc1-580717c2f177-bound-sa-token\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.594590 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvllg\" (UniqueName: \"kubernetes.io/projected/698d0139-4397-4dd2-bdc1-580717c2f177-kube-api-access-pvllg\") pod \"image-registry-66df7c8f76-9qdxp\" (UID: \"698d0139-4397-4dd2-bdc1-580717c2f177\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.686707 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7"] Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.687367 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.688554 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.693509 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.695248 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.695639 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.696022 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.696339 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.702098 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.713010 4676 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-6567888b4c-v7f69"] Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.713795 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.720454 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.720707 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.721937 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.722107 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.722305 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.724157 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7"] Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.724459 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.728238 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.747179 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6567888b4c-v7f69"] Jan 24 00:08:50 crc 
kubenswrapper[4676]: I0124 00:08:50.778032 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7dc4ada-dd45-47fd-922a-903ba16f5b6f-config\") pod \"route-controller-manager-5cd57b68f5-8z6p7\" (UID: \"b7dc4ada-dd45-47fd-922a-903ba16f5b6f\") " pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.778085 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gd6z\" (UniqueName: \"kubernetes.io/projected/b7dc4ada-dd45-47fd-922a-903ba16f5b6f-kube-api-access-8gd6z\") pod \"route-controller-manager-5cd57b68f5-8z6p7\" (UID: \"b7dc4ada-dd45-47fd-922a-903ba16f5b6f\") " pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.778112 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7dc4ada-dd45-47fd-922a-903ba16f5b6f-serving-cert\") pod \"route-controller-manager-5cd57b68f5-8z6p7\" (UID: \"b7dc4ada-dd45-47fd-922a-903ba16f5b6f\") " pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.778168 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7dc4ada-dd45-47fd-922a-903ba16f5b6f-client-ca\") pod \"route-controller-manager-5cd57b68f5-8z6p7\" (UID: \"b7dc4ada-dd45-47fd-922a-903ba16f5b6f\") " pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.875277 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9qdxp"] Jan 24 00:08:50 crc 
kubenswrapper[4676]: I0124 00:08:50.879521 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hnq\" (UniqueName: \"kubernetes.io/projected/3166691a-d300-4e14-a2ae-745fac50b3b4-kube-api-access-v5hnq\") pod \"controller-manager-6567888b4c-v7f69\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") " pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.879565 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3166691a-d300-4e14-a2ae-745fac50b3b4-serving-cert\") pod \"controller-manager-6567888b4c-v7f69\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") " pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.879593 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-client-ca\") pod \"controller-manager-6567888b4c-v7f69\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") " pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.879754 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7dc4ada-dd45-47fd-922a-903ba16f5b6f-client-ca\") pod \"route-controller-manager-5cd57b68f5-8z6p7\" (UID: \"b7dc4ada-dd45-47fd-922a-903ba16f5b6f\") " pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.879856 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-proxy-ca-bundles\") pod \"controller-manager-6567888b4c-v7f69\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") " pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.879913 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7dc4ada-dd45-47fd-922a-903ba16f5b6f-config\") pod \"route-controller-manager-5cd57b68f5-8z6p7\" (UID: \"b7dc4ada-dd45-47fd-922a-903ba16f5b6f\") " pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.879937 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-config\") pod \"controller-manager-6567888b4c-v7f69\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") " pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.879973 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gd6z\" (UniqueName: \"kubernetes.io/projected/b7dc4ada-dd45-47fd-922a-903ba16f5b6f-kube-api-access-8gd6z\") pod \"route-controller-manager-5cd57b68f5-8z6p7\" (UID: \"b7dc4ada-dd45-47fd-922a-903ba16f5b6f\") " pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.880007 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7dc4ada-dd45-47fd-922a-903ba16f5b6f-serving-cert\") pod \"route-controller-manager-5cd57b68f5-8z6p7\" (UID: \"b7dc4ada-dd45-47fd-922a-903ba16f5b6f\") " pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:50 
crc kubenswrapper[4676]: I0124 00:08:50.881207 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7dc4ada-dd45-47fd-922a-903ba16f5b6f-client-ca\") pod \"route-controller-manager-5cd57b68f5-8z6p7\" (UID: \"b7dc4ada-dd45-47fd-922a-903ba16f5b6f\") " pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.881415 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7dc4ada-dd45-47fd-922a-903ba16f5b6f-config\") pod \"route-controller-manager-5cd57b68f5-8z6p7\" (UID: \"b7dc4ada-dd45-47fd-922a-903ba16f5b6f\") " pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.894108 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7dc4ada-dd45-47fd-922a-903ba16f5b6f-serving-cert\") pod \"route-controller-manager-5cd57b68f5-8z6p7\" (UID: \"b7dc4ada-dd45-47fd-922a-903ba16f5b6f\") " pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.897117 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gd6z\" (UniqueName: \"kubernetes.io/projected/b7dc4ada-dd45-47fd-922a-903ba16f5b6f-kube-api-access-8gd6z\") pod \"route-controller-manager-5cd57b68f5-8z6p7\" (UID: \"b7dc4ada-dd45-47fd-922a-903ba16f5b6f\") " pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.981518 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-proxy-ca-bundles\") pod \"controller-manager-6567888b4c-v7f69\" 
(UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") " pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.981571 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-config\") pod \"controller-manager-6567888b4c-v7f69\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") " pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.981607 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3166691a-d300-4e14-a2ae-745fac50b3b4-serving-cert\") pod \"controller-manager-6567888b4c-v7f69\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") " pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.981624 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5hnq\" (UniqueName: \"kubernetes.io/projected/3166691a-d300-4e14-a2ae-745fac50b3b4-kube-api-access-v5hnq\") pod \"controller-manager-6567888b4c-v7f69\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") " pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.981646 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-client-ca\") pod \"controller-manager-6567888b4c-v7f69\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") " pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.982444 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-client-ca\") pod \"controller-manager-6567888b4c-v7f69\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") " pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.983534 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-proxy-ca-bundles\") pod \"controller-manager-6567888b4c-v7f69\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") " pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.983601 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-config\") pod \"controller-manager-6567888b4c-v7f69\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") " pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.986537 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3166691a-d300-4e14-a2ae-745fac50b3b4-serving-cert\") pod \"controller-manager-6567888b4c-v7f69\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") " pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:50 crc kubenswrapper[4676]: I0124 00:08:50.998094 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5hnq\" (UniqueName: \"kubernetes.io/projected/3166691a-d300-4e14-a2ae-745fac50b3b4-kube-api-access-v5hnq\") pod \"controller-manager-6567888b4c-v7f69\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") " pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.001002 4676 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.037831 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.065398 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" event={"ID":"698d0139-4397-4dd2-bdc1-580717c2f177","Type":"ContainerStarted","Data":"92940ffc0d2cfca482946fb554a79d9370861525d9d58ba8db4994a3dc9d39cd"} Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.225393 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6567888b4c-v7f69"] Jan 24 00:08:51 crc kubenswrapper[4676]: W0124 00:08:51.230634 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3166691a_d300_4e14_a2ae_745fac50b3b4.slice/crio-37eb479d7ed3ff133df52eba62ac268cc9d612b8b14b6586faeb23724b08b838 WatchSource:0}: Error finding container 37eb479d7ed3ff133df52eba62ac268cc9d612b8b14b6586faeb23724b08b838: Status 404 returned error can't find the container with id 37eb479d7ed3ff133df52eba62ac268cc9d612b8b14b6586faeb23724b08b838 Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.384795 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7"] Jan 24 00:08:51 crc kubenswrapper[4676]: W0124 00:08:51.392352 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7dc4ada_dd45_47fd_922a_903ba16f5b6f.slice/crio-e066f82670a61a77c62917ec31f759a4319de1c12e3ea9dbf6cbe72af7361e7f WatchSource:0}: Error finding container 
e066f82670a61a77c62917ec31f759a4319de1c12e3ea9dbf6cbe72af7361e7f: Status 404 returned error can't find the container with id e066f82670a61a77c62917ec31f759a4319de1c12e3ea9dbf6cbe72af7361e7f Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.662498 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h6lxx"] Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.664541 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h6lxx" Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.670952 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.691830 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h6lxx"] Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.792868 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdlrw\" (UniqueName: \"kubernetes.io/projected/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-kube-api-access-wdlrw\") pod \"certified-operators-h6lxx\" (UID: \"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec\") " pod="openshift-marketplace/certified-operators-h6lxx" Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.792921 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-utilities\") pod \"certified-operators-h6lxx\" (UID: \"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec\") " pod="openshift-marketplace/certified-operators-h6lxx" Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.792957 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-catalog-content\") pod \"certified-operators-h6lxx\" (UID: \"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec\") " pod="openshift-marketplace/certified-operators-h6lxx" Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.893824 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-catalog-content\") pod \"certified-operators-h6lxx\" (UID: \"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec\") " pod="openshift-marketplace/certified-operators-h6lxx" Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.894108 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdlrw\" (UniqueName: \"kubernetes.io/projected/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-kube-api-access-wdlrw\") pod \"certified-operators-h6lxx\" (UID: \"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec\") " pod="openshift-marketplace/certified-operators-h6lxx" Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.894135 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-utilities\") pod \"certified-operators-h6lxx\" (UID: \"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec\") " pod="openshift-marketplace/certified-operators-h6lxx" Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.894752 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-utilities\") pod \"certified-operators-h6lxx\" (UID: \"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec\") " pod="openshift-marketplace/certified-operators-h6lxx" Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.894908 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-catalog-content\") pod \"certified-operators-h6lxx\" (UID: \"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec\") " pod="openshift-marketplace/certified-operators-h6lxx" Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.919017 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdlrw\" (UniqueName: \"kubernetes.io/projected/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-kube-api-access-wdlrw\") pod \"certified-operators-h6lxx\" (UID: \"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec\") " pod="openshift-marketplace/certified-operators-h6lxx" Jan 24 00:08:51 crc kubenswrapper[4676]: I0124 00:08:51.997900 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h6lxx" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.071211 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" event={"ID":"698d0139-4397-4dd2-bdc1-580717c2f177","Type":"ContainerStarted","Data":"454a2b3261dc96d1aa000c62b3271f6a19ae8d899e8d8d4d52f576846cb675e2"} Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.072100 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.073423 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" event={"ID":"3166691a-d300-4e14-a2ae-745fac50b3b4","Type":"ContainerStarted","Data":"17caedb43a800482408b315f9631ff9a96ef55fffba165823c14cc78155a7eef"} Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.073444 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" 
event={"ID":"3166691a-d300-4e14-a2ae-745fac50b3b4","Type":"ContainerStarted","Data":"37eb479d7ed3ff133df52eba62ac268cc9d612b8b14b6586faeb23724b08b838"} Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.073939 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.074768 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" event={"ID":"b7dc4ada-dd45-47fd-922a-903ba16f5b6f","Type":"ContainerStarted","Data":"4b28313a5d30610e5a876388bca48312314c883ec5de82b84586937fa888ecd3"} Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.074787 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" event={"ID":"b7dc4ada-dd45-47fd-922a-903ba16f5b6f","Type":"ContainerStarted","Data":"e066f82670a61a77c62917ec31f759a4319de1c12e3ea9dbf6cbe72af7361e7f"} Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.075184 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.087861 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.105558 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp" podStartSLOduration=2.105543142 podStartE2EDuration="2.105543142s" podCreationTimestamp="2026-01-24 00:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:08:52.102486311 +0000 UTC m=+316.132457312" 
watchObservedRunningTime="2026-01-24 00:08:52.105543142 +0000 UTC m=+316.135514133" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.206729 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" podStartSLOduration=4.206714039 podStartE2EDuration="4.206714039s" podCreationTimestamp="2026-01-24 00:08:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:08:52.201276681 +0000 UTC m=+316.231247682" watchObservedRunningTime="2026-01-24 00:08:52.206714039 +0000 UTC m=+316.236685040" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.236472 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" podStartSLOduration=4.236450227 podStartE2EDuration="4.236450227s" podCreationTimestamp="2026-01-24 00:08:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:08:52.234643048 +0000 UTC m=+316.264614049" watchObservedRunningTime="2026-01-24 00:08:52.236450227 +0000 UTC m=+316.266421228" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.271563 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rjjnx"] Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.272519 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rjjnx" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.284587 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.324712 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rjjnx"] Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.363584 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5cd57b68f5-8z6p7" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.388349 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h6lxx"] Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.406457 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/646456dc-35bc-4df2-8f92-55cdfefc6010-catalog-content\") pod \"community-operators-rjjnx\" (UID: \"646456dc-35bc-4df2-8f92-55cdfefc6010\") " pod="openshift-marketplace/community-operators-rjjnx" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.406521 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/646456dc-35bc-4df2-8f92-55cdfefc6010-utilities\") pod \"community-operators-rjjnx\" (UID: \"646456dc-35bc-4df2-8f92-55cdfefc6010\") " pod="openshift-marketplace/community-operators-rjjnx" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.406557 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5wp4\" (UniqueName: \"kubernetes.io/projected/646456dc-35bc-4df2-8f92-55cdfefc6010-kube-api-access-n5wp4\") pod \"community-operators-rjjnx\" (UID: 
\"646456dc-35bc-4df2-8f92-55cdfefc6010\") " pod="openshift-marketplace/community-operators-rjjnx" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.507388 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/646456dc-35bc-4df2-8f92-55cdfefc6010-utilities\") pod \"community-operators-rjjnx\" (UID: \"646456dc-35bc-4df2-8f92-55cdfefc6010\") " pod="openshift-marketplace/community-operators-rjjnx" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.507457 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5wp4\" (UniqueName: \"kubernetes.io/projected/646456dc-35bc-4df2-8f92-55cdfefc6010-kube-api-access-n5wp4\") pod \"community-operators-rjjnx\" (UID: \"646456dc-35bc-4df2-8f92-55cdfefc6010\") " pod="openshift-marketplace/community-operators-rjjnx" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.507514 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/646456dc-35bc-4df2-8f92-55cdfefc6010-catalog-content\") pod \"community-operators-rjjnx\" (UID: \"646456dc-35bc-4df2-8f92-55cdfefc6010\") " pod="openshift-marketplace/community-operators-rjjnx" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.507958 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/646456dc-35bc-4df2-8f92-55cdfefc6010-utilities\") pod \"community-operators-rjjnx\" (UID: \"646456dc-35bc-4df2-8f92-55cdfefc6010\") " pod="openshift-marketplace/community-operators-rjjnx" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.508573 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/646456dc-35bc-4df2-8f92-55cdfefc6010-catalog-content\") pod \"community-operators-rjjnx\" (UID: \"646456dc-35bc-4df2-8f92-55cdfefc6010\") 
" pod="openshift-marketplace/community-operators-rjjnx" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.524742 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5wp4\" (UniqueName: \"kubernetes.io/projected/646456dc-35bc-4df2-8f92-55cdfefc6010-kube-api-access-n5wp4\") pod \"community-operators-rjjnx\" (UID: \"646456dc-35bc-4df2-8f92-55cdfefc6010\") " pod="openshift-marketplace/community-operators-rjjnx" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.592931 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rjjnx" Jan 24 00:08:52 crc kubenswrapper[4676]: I0124 00:08:52.993684 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rjjnx"] Jan 24 00:08:52 crc kubenswrapper[4676]: W0124 00:08:52.998999 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod646456dc_35bc_4df2_8f92_55cdfefc6010.slice/crio-873d12d25a978b1d381c40e12ad2f6a27ab8fbf2fce4ad20e07e4752dfe1ed65 WatchSource:0}: Error finding container 873d12d25a978b1d381c40e12ad2f6a27ab8fbf2fce4ad20e07e4752dfe1ed65: Status 404 returned error can't find the container with id 873d12d25a978b1d381c40e12ad2f6a27ab8fbf2fce4ad20e07e4752dfe1ed65 Jan 24 00:08:53 crc kubenswrapper[4676]: I0124 00:08:53.081109 4676 generic.go:334] "Generic (PLEG): container finished" podID="8b3f0cd4-dad6-4882-8cf8-03e0a23768ec" containerID="e1126bba560d6ad086386fdbfde6ce098ca6ec45eb8ef89aa5c2a014e09ba3c2" exitCode=0 Jan 24 00:08:53 crc kubenswrapper[4676]: I0124 00:08:53.081170 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6lxx" event={"ID":"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec","Type":"ContainerDied","Data":"e1126bba560d6ad086386fdbfde6ce098ca6ec45eb8ef89aa5c2a014e09ba3c2"} Jan 24 00:08:53 crc kubenswrapper[4676]: I0124 
00:08:53.081201 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6lxx" event={"ID":"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec","Type":"ContainerStarted","Data":"fb5783295da5666c028415e55625575a6eddcc0d1dfa67d4345fe77ee0fb6e6b"} Jan 24 00:08:53 crc kubenswrapper[4676]: I0124 00:08:53.082598 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rjjnx" event={"ID":"646456dc-35bc-4df2-8f92-55cdfefc6010","Type":"ContainerStarted","Data":"873d12d25a978b1d381c40e12ad2f6a27ab8fbf2fce4ad20e07e4752dfe1ed65"} Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.058050 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vw7tg"] Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.059163 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vw7tg" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.061158 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.072402 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vw7tg"] Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.090080 4676 generic.go:334] "Generic (PLEG): container finished" podID="646456dc-35bc-4df2-8f92-55cdfefc6010" containerID="58c7ef0829c97d66dbcc2b639b255fe055d395775dc8bbd554aec915453da312" exitCode=0 Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.090151 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rjjnx" event={"ID":"646456dc-35bc-4df2-8f92-55cdfefc6010","Type":"ContainerDied","Data":"58c7ef0829c97d66dbcc2b639b255fe055d395775dc8bbd554aec915453da312"} Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.133534 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c82da49-780b-431e-bfe7-d52ce3bcb623-catalog-content\") pod \"redhat-marketplace-vw7tg\" (UID: \"4c82da49-780b-431e-bfe7-d52ce3bcb623\") " pod="openshift-marketplace/redhat-marketplace-vw7tg" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.133687 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rmtt\" (UniqueName: \"kubernetes.io/projected/4c82da49-780b-431e-bfe7-d52ce3bcb623-kube-api-access-8rmtt\") pod \"redhat-marketplace-vw7tg\" (UID: \"4c82da49-780b-431e-bfe7-d52ce3bcb623\") " pod="openshift-marketplace/redhat-marketplace-vw7tg" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.133825 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c82da49-780b-431e-bfe7-d52ce3bcb623-utilities\") pod \"redhat-marketplace-vw7tg\" (UID: \"4c82da49-780b-431e-bfe7-d52ce3bcb623\") " pod="openshift-marketplace/redhat-marketplace-vw7tg" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.234563 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c82da49-780b-431e-bfe7-d52ce3bcb623-catalog-content\") pod \"redhat-marketplace-vw7tg\" (UID: \"4c82da49-780b-431e-bfe7-d52ce3bcb623\") " pod="openshift-marketplace/redhat-marketplace-vw7tg" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.234682 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rmtt\" (UniqueName: \"kubernetes.io/projected/4c82da49-780b-431e-bfe7-d52ce3bcb623-kube-api-access-8rmtt\") pod \"redhat-marketplace-vw7tg\" (UID: \"4c82da49-780b-431e-bfe7-d52ce3bcb623\") " pod="openshift-marketplace/redhat-marketplace-vw7tg" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 
00:08:54.234725 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c82da49-780b-431e-bfe7-d52ce3bcb623-utilities\") pod \"redhat-marketplace-vw7tg\" (UID: \"4c82da49-780b-431e-bfe7-d52ce3bcb623\") " pod="openshift-marketplace/redhat-marketplace-vw7tg" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.235101 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c82da49-780b-431e-bfe7-d52ce3bcb623-utilities\") pod \"redhat-marketplace-vw7tg\" (UID: \"4c82da49-780b-431e-bfe7-d52ce3bcb623\") " pod="openshift-marketplace/redhat-marketplace-vw7tg" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.235311 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c82da49-780b-431e-bfe7-d52ce3bcb623-catalog-content\") pod \"redhat-marketplace-vw7tg\" (UID: \"4c82da49-780b-431e-bfe7-d52ce3bcb623\") " pod="openshift-marketplace/redhat-marketplace-vw7tg" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.258721 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rmtt\" (UniqueName: \"kubernetes.io/projected/4c82da49-780b-431e-bfe7-d52ce3bcb623-kube-api-access-8rmtt\") pod \"redhat-marketplace-vw7tg\" (UID: \"4c82da49-780b-431e-bfe7-d52ce3bcb623\") " pod="openshift-marketplace/redhat-marketplace-vw7tg" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.375174 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vw7tg" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.655583 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6mz4x"] Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.656752 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6mz4x" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.659693 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.672731 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6mz4x"] Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.741935 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr7r2\" (UniqueName: \"kubernetes.io/projected/829774a4-dd19-462e-829c-f201bddf6886-kube-api-access-fr7r2\") pod \"redhat-operators-6mz4x\" (UID: \"829774a4-dd19-462e-829c-f201bddf6886\") " pod="openshift-marketplace/redhat-operators-6mz4x" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.742228 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829774a4-dd19-462e-829c-f201bddf6886-catalog-content\") pod \"redhat-operators-6mz4x\" (UID: \"829774a4-dd19-462e-829c-f201bddf6886\") " pod="openshift-marketplace/redhat-operators-6mz4x" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.742347 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829774a4-dd19-462e-829c-f201bddf6886-utilities\") pod \"redhat-operators-6mz4x\" (UID: \"829774a4-dd19-462e-829c-f201bddf6886\") " pod="openshift-marketplace/redhat-operators-6mz4x" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.777664 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vw7tg"] Jan 24 00:08:54 crc kubenswrapper[4676]: W0124 00:08:54.786761 4676 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c82da49_780b_431e_bfe7_d52ce3bcb623.slice/crio-87779d34b0d4df9f6fb9a6f75cdce41fa7658d4a74055ee1cd77144d2add7d95 WatchSource:0}: Error finding container 87779d34b0d4df9f6fb9a6f75cdce41fa7658d4a74055ee1cd77144d2add7d95: Status 404 returned error can't find the container with id 87779d34b0d4df9f6fb9a6f75cdce41fa7658d4a74055ee1cd77144d2add7d95 Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.843770 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr7r2\" (UniqueName: \"kubernetes.io/projected/829774a4-dd19-462e-829c-f201bddf6886-kube-api-access-fr7r2\") pod \"redhat-operators-6mz4x\" (UID: \"829774a4-dd19-462e-829c-f201bddf6886\") " pod="openshift-marketplace/redhat-operators-6mz4x" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.844030 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829774a4-dd19-462e-829c-f201bddf6886-catalog-content\") pod \"redhat-operators-6mz4x\" (UID: \"829774a4-dd19-462e-829c-f201bddf6886\") " pod="openshift-marketplace/redhat-operators-6mz4x" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.844196 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829774a4-dd19-462e-829c-f201bddf6886-utilities\") pod \"redhat-operators-6mz4x\" (UID: \"829774a4-dd19-462e-829c-f201bddf6886\") " pod="openshift-marketplace/redhat-operators-6mz4x" Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.844576 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829774a4-dd19-462e-829c-f201bddf6886-catalog-content\") pod \"redhat-operators-6mz4x\" (UID: \"829774a4-dd19-462e-829c-f201bddf6886\") " pod="openshift-marketplace/redhat-operators-6mz4x" Jan 24 00:08:54 crc 
kubenswrapper[4676]: I0124 00:08:54.844679 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829774a4-dd19-462e-829c-f201bddf6886-utilities\") pod \"redhat-operators-6mz4x\" (UID: \"829774a4-dd19-462e-829c-f201bddf6886\") " pod="openshift-marketplace/redhat-operators-6mz4x"
Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.869717 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr7r2\" (UniqueName: \"kubernetes.io/projected/829774a4-dd19-462e-829c-f201bddf6886-kube-api-access-fr7r2\") pod \"redhat-operators-6mz4x\" (UID: \"829774a4-dd19-462e-829c-f201bddf6886\") " pod="openshift-marketplace/redhat-operators-6mz4x"
Jan 24 00:08:54 crc kubenswrapper[4676]: I0124 00:08:54.972066 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6mz4x"
Jan 24 00:08:55 crc kubenswrapper[4676]: I0124 00:08:55.100663 4676 generic.go:334] "Generic (PLEG): container finished" podID="8b3f0cd4-dad6-4882-8cf8-03e0a23768ec" containerID="52bc77f8ab557f140d976553f763b957c850be181430d480625497a667852cf5" exitCode=0
Jan 24 00:08:55 crc kubenswrapper[4676]: I0124 00:08:55.100873 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6lxx" event={"ID":"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec","Type":"ContainerDied","Data":"52bc77f8ab557f140d976553f763b957c850be181430d480625497a667852cf5"}
Jan 24 00:08:55 crc kubenswrapper[4676]: I0124 00:08:55.113974 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw7tg" event={"ID":"4c82da49-780b-431e-bfe7-d52ce3bcb623","Type":"ContainerStarted","Data":"87779d34b0d4df9f6fb9a6f75cdce41fa7658d4a74055ee1cd77144d2add7d95"}
Jan 24 00:08:55 crc kubenswrapper[4676]: I0124 00:08:55.378047 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6mz4x"]
Jan 24 00:08:55 crc kubenswrapper[4676]: W0124 00:08:55.388639 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod829774a4_dd19_462e_829c_f201bddf6886.slice/crio-d40c66edb9d526f7eafa7bcd62806d639f76eae15baf3efd412242c5e14b1ac7 WatchSource:0}: Error finding container d40c66edb9d526f7eafa7bcd62806d639f76eae15baf3efd412242c5e14b1ac7: Status 404 returned error can't find the container with id d40c66edb9d526f7eafa7bcd62806d639f76eae15baf3efd412242c5e14b1ac7
Jan 24 00:08:56 crc kubenswrapper[4676]: I0124 00:08:56.120707 4676 generic.go:334] "Generic (PLEG): container finished" podID="829774a4-dd19-462e-829c-f201bddf6886" containerID="b4fa1a55d653742673b269e36abd70a8ffdf13c32586d0ad8c05867c492d81d6" exitCode=0
Jan 24 00:08:56 crc kubenswrapper[4676]: I0124 00:08:56.120820 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6mz4x" event={"ID":"829774a4-dd19-462e-829c-f201bddf6886","Type":"ContainerDied","Data":"b4fa1a55d653742673b269e36abd70a8ffdf13c32586d0ad8c05867c492d81d6"}
Jan 24 00:08:56 crc kubenswrapper[4676]: I0124 00:08:56.122406 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6mz4x" event={"ID":"829774a4-dd19-462e-829c-f201bddf6886","Type":"ContainerStarted","Data":"d40c66edb9d526f7eafa7bcd62806d639f76eae15baf3efd412242c5e14b1ac7"}
Jan 24 00:08:56 crc kubenswrapper[4676]: I0124 00:08:56.127431 4676 generic.go:334] "Generic (PLEG): container finished" podID="4c82da49-780b-431e-bfe7-d52ce3bcb623" containerID="2ad57c4829ee7e5ba8d86b54b074c291baf42d2492d0a4ade40e4ddd0a9e2c36" exitCode=0
Jan 24 00:08:56 crc kubenswrapper[4676]: I0124 00:08:56.127485 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw7tg" event={"ID":"4c82da49-780b-431e-bfe7-d52ce3bcb623","Type":"ContainerDied","Data":"2ad57c4829ee7e5ba8d86b54b074c291baf42d2492d0a4ade40e4ddd0a9e2c36"}
Jan 24 00:08:57 crc kubenswrapper[4676]: I0124 00:08:57.135021 4676 generic.go:334] "Generic (PLEG): container finished" podID="646456dc-35bc-4df2-8f92-55cdfefc6010" containerID="6d56dc187e7b9a09f0626f71f550fffbc8a65054f1ea765e965463593c59a4f6" exitCode=0
Jan 24 00:08:57 crc kubenswrapper[4676]: I0124 00:08:57.135111 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rjjnx" event={"ID":"646456dc-35bc-4df2-8f92-55cdfefc6010","Type":"ContainerDied","Data":"6d56dc187e7b9a09f0626f71f550fffbc8a65054f1ea765e965463593c59a4f6"}
Jan 24 00:08:57 crc kubenswrapper[4676]: I0124 00:08:57.141573 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6lxx" event={"ID":"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec","Type":"ContainerStarted","Data":"08fb9c82277ed77127a24cba780debc02f8a59da1333b461232dac0199f577d3"}
Jan 24 00:08:58 crc kubenswrapper[4676]: I0124 00:08:58.147925 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw7tg" event={"ID":"4c82da49-780b-431e-bfe7-d52ce3bcb623","Type":"ContainerStarted","Data":"b735d13a2378d0ba980128541eea2a3c9ffff1e2f4f3b214c7495548bc254a78"}
Jan 24 00:08:58 crc kubenswrapper[4676]: I0124 00:08:58.150903 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6mz4x" event={"ID":"829774a4-dd19-462e-829c-f201bddf6886","Type":"ContainerStarted","Data":"0eeb0c466ea082c5a0617505ab4895dc30f2bf85182ce4f980c155a9b92eaf6a"}
Jan 24 00:08:59 crc kubenswrapper[4676]: I0124 00:08:59.159310 4676 generic.go:334] "Generic (PLEG): container finished" podID="829774a4-dd19-462e-829c-f201bddf6886" containerID="0eeb0c466ea082c5a0617505ab4895dc30f2bf85182ce4f980c155a9b92eaf6a" exitCode=0
Jan 24 00:08:59 crc kubenswrapper[4676]: I0124 00:08:59.159690 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6mz4x" event={"ID":"829774a4-dd19-462e-829c-f201bddf6886","Type":"ContainerDied","Data":"0eeb0c466ea082c5a0617505ab4895dc30f2bf85182ce4f980c155a9b92eaf6a"}
Jan 24 00:08:59 crc kubenswrapper[4676]: I0124 00:08:59.161799 4676 generic.go:334] "Generic (PLEG): container finished" podID="4c82da49-780b-431e-bfe7-d52ce3bcb623" containerID="b735d13a2378d0ba980128541eea2a3c9ffff1e2f4f3b214c7495548bc254a78" exitCode=0
Jan 24 00:08:59 crc kubenswrapper[4676]: I0124 00:08:59.161911 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw7tg" event={"ID":"4c82da49-780b-431e-bfe7-d52ce3bcb623","Type":"ContainerDied","Data":"b735d13a2378d0ba980128541eea2a3c9ffff1e2f4f3b214c7495548bc254a78"}
Jan 24 00:08:59 crc kubenswrapper[4676]: I0124 00:08:59.168931 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rjjnx" event={"ID":"646456dc-35bc-4df2-8f92-55cdfefc6010","Type":"ContainerStarted","Data":"bce9f6b37e559b5c43b32d751e038d3ab9cf0038b0ce24cfb67169da92057021"}
Jan 24 00:08:59 crc kubenswrapper[4676]: I0124 00:08:59.187436 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h6lxx" podStartSLOduration=4.781868617 podStartE2EDuration="8.187416085s" podCreationTimestamp="2026-01-24 00:08:51 +0000 UTC" firstStartedPulling="2026-01-24 00:08:53.084167799 +0000 UTC m=+317.114138840" lastFinishedPulling="2026-01-24 00:08:56.489715307 +0000 UTC m=+320.519686308" observedRunningTime="2026-01-24 00:08:58.21250774 +0000 UTC m=+322.242478741" watchObservedRunningTime="2026-01-24 00:08:59.187416085 +0000 UTC m=+323.217387096"
Jan 24 00:08:59 crc kubenswrapper[4676]: I0124 00:08:59.205596 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rjjnx" podStartSLOduration=3.10911072 podStartE2EDuration="7.205580802s" podCreationTimestamp="2026-01-24 00:08:52 +0000 UTC" firstStartedPulling="2026-01-24 00:08:54.092863375 +0000 UTC m=+318.122834376" lastFinishedPulling="2026-01-24 00:08:58.189333457 +0000 UTC m=+322.219304458" observedRunningTime="2026-01-24 00:08:59.204965951 +0000 UTC m=+323.234936972" watchObservedRunningTime="2026-01-24 00:08:59.205580802 +0000 UTC m=+323.235551803"
Jan 24 00:09:02 crc kubenswrapper[4676]: I0124 00:09:01.999955 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h6lxx"
Jan 24 00:09:02 crc kubenswrapper[4676]: I0124 00:09:02.003490 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h6lxx"
Jan 24 00:09:02 crc kubenswrapper[4676]: I0124 00:09:02.118518 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h6lxx"
Jan 24 00:09:02 crc kubenswrapper[4676]: I0124 00:09:02.209698 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw7tg" event={"ID":"4c82da49-780b-431e-bfe7-d52ce3bcb623","Type":"ContainerStarted","Data":"557ff3d4bd760ab14450e8412163fa653f384dce00b97bc2753f06d8fb200fb8"}
Jan 24 00:09:02 crc kubenswrapper[4676]: I0124 00:09:02.266525 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h6lxx"
Jan 24 00:09:02 crc kubenswrapper[4676]: I0124 00:09:02.593442 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rjjnx"
Jan 24 00:09:02 crc kubenswrapper[4676]: I0124 00:09:02.593508 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rjjnx"
Jan 24 00:09:02 crc kubenswrapper[4676]: I0124 00:09:02.651113 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rjjnx"
Jan 24 00:09:03 crc kubenswrapper[4676]: I0124 00:09:03.216427 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6mz4x" event={"ID":"829774a4-dd19-462e-829c-f201bddf6886","Type":"ContainerStarted","Data":"1f4517eae223160755fe67ca42667d4b94c3389329951409164d7c368755eb3d"}
Jan 24 00:09:03 crc kubenswrapper[4676]: I0124 00:09:03.234048 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vw7tg" podStartSLOduration=3.403564113 podStartE2EDuration="9.234033197s" podCreationTimestamp="2026-01-24 00:08:54 +0000 UTC" firstStartedPulling="2026-01-24 00:08:56.144401959 +0000 UTC m=+320.174372960" lastFinishedPulling="2026-01-24 00:09:01.974871003 +0000 UTC m=+326.004842044" observedRunningTime="2026-01-24 00:09:03.233512979 +0000 UTC m=+327.263484000" watchObservedRunningTime="2026-01-24 00:09:03.234033197 +0000 UTC m=+327.264004198"
Jan 24 00:09:03 crc kubenswrapper[4676]: I0124 00:09:03.249345 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6mz4x" podStartSLOduration=3.405570291 podStartE2EDuration="9.24932428s" podCreationTimestamp="2026-01-24 00:08:54 +0000 UTC" firstStartedPulling="2026-01-24 00:08:56.124143714 +0000 UTC m=+320.154114735" lastFinishedPulling="2026-01-24 00:09:01.967897723 +0000 UTC m=+325.997868724" observedRunningTime="2026-01-24 00:09:03.247639614 +0000 UTC m=+327.277610625" watchObservedRunningTime="2026-01-24 00:09:03.24932428 +0000 UTC m=+327.279295291"
Jan 24 00:09:03 crc kubenswrapper[4676]: I0124 00:09:03.262020 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rjjnx"
Jan 24 00:09:04 crc kubenswrapper[4676]: I0124 00:09:04.375557 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vw7tg"
Jan 24 00:09:04 crc kubenswrapper[4676]: I0124 00:09:04.375630 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vw7tg"
Jan 24 00:09:04 crc kubenswrapper[4676]: I0124 00:09:04.436959 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vw7tg"
Jan 24 00:09:04 crc kubenswrapper[4676]: I0124 00:09:04.973028 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6mz4x"
Jan 24 00:09:04 crc kubenswrapper[4676]: I0124 00:09:04.973585 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6mz4x"
Jan 24 00:09:06 crc kubenswrapper[4676]: I0124 00:09:06.011060 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6mz4x" podUID="829774a4-dd19-462e-829c-f201bddf6886" containerName="registry-server" probeResult="failure" output=<
Jan 24 00:09:06 crc kubenswrapper[4676]: timeout: failed to connect service ":50051" within 1s
Jan 24 00:09:06 crc kubenswrapper[4676]: >
Jan 24 00:09:10 crc kubenswrapper[4676]: I0124 00:09:10.701276 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9qdxp"
Jan 24 00:09:10 crc kubenswrapper[4676]: I0124 00:09:10.782556 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-shf2w"]
Jan 24 00:09:14 crc kubenswrapper[4676]: I0124 00:09:14.432968 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vw7tg"
Jan 24 00:09:15 crc kubenswrapper[4676]: I0124 00:09:15.033789 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6mz4x"
Jan 24 00:09:15 crc kubenswrapper[4676]: I0124 00:09:15.095449 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6mz4x"
Jan 24 00:09:35 crc kubenswrapper[4676]: I0124 00:09:35.847606 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" podUID="9887557b-81eb-4651-8da2-fd34d7b0be97" containerName="registry" containerID="cri-o://38b74903d521a57e98f715c4b02dad7881bee2134fc7d9ceec4368f1ee47f472" gracePeriod=30
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.224259 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-shf2w"
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.408288 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9887557b-81eb-4651-8da2-fd34d7b0be97-registry-certificates\") pod \"9887557b-81eb-4651-8da2-fd34d7b0be97\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") "
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.408343 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9887557b-81eb-4651-8da2-fd34d7b0be97-ca-trust-extracted\") pod \"9887557b-81eb-4651-8da2-fd34d7b0be97\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") "
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.408419 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9887557b-81eb-4651-8da2-fd34d7b0be97-trusted-ca\") pod \"9887557b-81eb-4651-8da2-fd34d7b0be97\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") "
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.408577 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"9887557b-81eb-4651-8da2-fd34d7b0be97\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") "
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.408643 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlknz\" (UniqueName: \"kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-kube-api-access-hlknz\") pod \"9887557b-81eb-4651-8da2-fd34d7b0be97\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") "
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.408683 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9887557b-81eb-4651-8da2-fd34d7b0be97-installation-pull-secrets\") pod \"9887557b-81eb-4651-8da2-fd34d7b0be97\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") "
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.408729 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-registry-tls\") pod \"9887557b-81eb-4651-8da2-fd34d7b0be97\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") "
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.408755 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-bound-sa-token\") pod \"9887557b-81eb-4651-8da2-fd34d7b0be97\" (UID: \"9887557b-81eb-4651-8da2-fd34d7b0be97\") "
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.409274 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9887557b-81eb-4651-8da2-fd34d7b0be97-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9887557b-81eb-4651-8da2-fd34d7b0be97" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.410007 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9887557b-81eb-4651-8da2-fd34d7b0be97-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9887557b-81eb-4651-8da2-fd34d7b0be97" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.417809 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-kube-api-access-hlknz" (OuterVolumeSpecName: "kube-api-access-hlknz") pod "9887557b-81eb-4651-8da2-fd34d7b0be97" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97"). InnerVolumeSpecName "kube-api-access-hlknz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.417916 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9887557b-81eb-4651-8da2-fd34d7b0be97-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9887557b-81eb-4651-8da2-fd34d7b0be97" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.418034 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9887557b-81eb-4651-8da2-fd34d7b0be97" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.423047 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "9887557b-81eb-4651-8da2-fd34d7b0be97" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.423872 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9887557b-81eb-4651-8da2-fd34d7b0be97" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.437407 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9887557b-81eb-4651-8da2-fd34d7b0be97-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9887557b-81eb-4651-8da2-fd34d7b0be97" (UID: "9887557b-81eb-4651-8da2-fd34d7b0be97"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.450744 4676 generic.go:334] "Generic (PLEG): container finished" podID="9887557b-81eb-4651-8da2-fd34d7b0be97" containerID="38b74903d521a57e98f715c4b02dad7881bee2134fc7d9ceec4368f1ee47f472" exitCode=0
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.450826 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-shf2w"
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.450856 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" event={"ID":"9887557b-81eb-4651-8da2-fd34d7b0be97","Type":"ContainerDied","Data":"38b74903d521a57e98f715c4b02dad7881bee2134fc7d9ceec4368f1ee47f472"}
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.451191 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-shf2w" event={"ID":"9887557b-81eb-4651-8da2-fd34d7b0be97","Type":"ContainerDied","Data":"927b1ecf53efd6bf7705450ed6ba3d508cc6d33832a7f941121086cc971d4adb"}
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.451218 4676 scope.go:117] "RemoveContainer" containerID="38b74903d521a57e98f715c4b02dad7881bee2134fc7d9ceec4368f1ee47f472"
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.484313 4676 scope.go:117] "RemoveContainer" containerID="38b74903d521a57e98f715c4b02dad7881bee2134fc7d9ceec4368f1ee47f472"
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.484821 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-shf2w"]
Jan 24 00:09:36 crc kubenswrapper[4676]: E0124 00:09:36.485192 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38b74903d521a57e98f715c4b02dad7881bee2134fc7d9ceec4368f1ee47f472\": container with ID starting with 38b74903d521a57e98f715c4b02dad7881bee2134fc7d9ceec4368f1ee47f472 not found: ID does not exist" containerID="38b74903d521a57e98f715c4b02dad7881bee2134fc7d9ceec4368f1ee47f472"
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.485239 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38b74903d521a57e98f715c4b02dad7881bee2134fc7d9ceec4368f1ee47f472"} err="failed to get container status \"38b74903d521a57e98f715c4b02dad7881bee2134fc7d9ceec4368f1ee47f472\": rpc error: code = NotFound desc = could not find container \"38b74903d521a57e98f715c4b02dad7881bee2134fc7d9ceec4368f1ee47f472\": container with ID starting with 38b74903d521a57e98f715c4b02dad7881bee2134fc7d9ceec4368f1ee47f472 not found: ID does not exist"
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.488665 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-shf2w"]
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.509684 4676 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.509713 4676 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.509723 4676 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9887557b-81eb-4651-8da2-fd34d7b0be97-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.509733 4676 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9887557b-81eb-4651-8da2-fd34d7b0be97-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.509744 4676 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9887557b-81eb-4651-8da2-fd34d7b0be97-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.509754 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlknz\" (UniqueName: \"kubernetes.io/projected/9887557b-81eb-4651-8da2-fd34d7b0be97-kube-api-access-hlknz\") on node \"crc\" DevicePath \"\""
Jan 24 00:09:36 crc kubenswrapper[4676]: I0124 00:09:36.509764 4676 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9887557b-81eb-4651-8da2-fd34d7b0be97-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 24 00:09:38 crc kubenswrapper[4676]: I0124 00:09:38.263988 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9887557b-81eb-4651-8da2-fd34d7b0be97" path="/var/lib/kubelet/pods/9887557b-81eb-4651-8da2-fd34d7b0be97/volumes"
Jan 24 00:09:39 crc kubenswrapper[4676]: I0124 00:09:39.363972 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 00:09:39 crc kubenswrapper[4676]: I0124 00:09:39.364031 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.351958 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6567888b4c-v7f69"]
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.353046 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" podUID="3166691a-d300-4e14-a2ae-745fac50b3b4" containerName="controller-manager" containerID="cri-o://17caedb43a800482408b315f9631ff9a96ef55fffba165823c14cc78155a7eef" gracePeriod=30
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.627822 4676 generic.go:334] "Generic (PLEG): container finished" podID="3166691a-d300-4e14-a2ae-745fac50b3b4" containerID="17caedb43a800482408b315f9631ff9a96ef55fffba165823c14cc78155a7eef" exitCode=0
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.627893 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" event={"ID":"3166691a-d300-4e14-a2ae-745fac50b3b4","Type":"ContainerDied","Data":"17caedb43a800482408b315f9631ff9a96ef55fffba165823c14cc78155a7eef"}
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.718449 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69"
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.842478 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5hnq\" (UniqueName: \"kubernetes.io/projected/3166691a-d300-4e14-a2ae-745fac50b3b4-kube-api-access-v5hnq\") pod \"3166691a-d300-4e14-a2ae-745fac50b3b4\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") "
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.842572 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-config\") pod \"3166691a-d300-4e14-a2ae-745fac50b3b4\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") "
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.842619 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-proxy-ca-bundles\") pod \"3166691a-d300-4e14-a2ae-745fac50b3b4\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") "
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.842652 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3166691a-d300-4e14-a2ae-745fac50b3b4-serving-cert\") pod \"3166691a-d300-4e14-a2ae-745fac50b3b4\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") "
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.842686 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-client-ca\") pod \"3166691a-d300-4e14-a2ae-745fac50b3b4\" (UID: \"3166691a-d300-4e14-a2ae-745fac50b3b4\") "
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.843766 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-client-ca" (OuterVolumeSpecName: "client-ca") pod "3166691a-d300-4e14-a2ae-745fac50b3b4" (UID: "3166691a-d300-4e14-a2ae-745fac50b3b4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.843812 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3166691a-d300-4e14-a2ae-745fac50b3b4" (UID: "3166691a-d300-4e14-a2ae-745fac50b3b4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.846001 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-config" (OuterVolumeSpecName: "config") pod "3166691a-d300-4e14-a2ae-745fac50b3b4" (UID: "3166691a-d300-4e14-a2ae-745fac50b3b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.848042 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3166691a-d300-4e14-a2ae-745fac50b3b4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3166691a-d300-4e14-a2ae-745fac50b3b4" (UID: "3166691a-d300-4e14-a2ae-745fac50b3b4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.848513 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3166691a-d300-4e14-a2ae-745fac50b3b4-kube-api-access-v5hnq" (OuterVolumeSpecName: "kube-api-access-v5hnq") pod "3166691a-d300-4e14-a2ae-745fac50b3b4" (UID: "3166691a-d300-4e14-a2ae-745fac50b3b4"). InnerVolumeSpecName "kube-api-access-v5hnq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.943775 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5hnq\" (UniqueName: \"kubernetes.io/projected/3166691a-d300-4e14-a2ae-745fac50b3b4-kube-api-access-v5hnq\") on node \"crc\" DevicePath \"\""
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.943809 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-config\") on node \"crc\" DevicePath \"\""
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.943824 4676 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.943834 4676 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3166691a-d300-4e14-a2ae-745fac50b3b4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 00:10:07 crc kubenswrapper[4676]: I0124 00:10:07.943846 4676 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3166691a-d300-4e14-a2ae-745fac50b3b4-client-ca\") on node \"crc\" DevicePath \"\""
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.637506 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69" event={"ID":"3166691a-d300-4e14-a2ae-745fac50b3b4","Type":"ContainerDied","Data":"37eb479d7ed3ff133df52eba62ac268cc9d612b8b14b6586faeb23724b08b838"}
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.637663 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6567888b4c-v7f69"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.639014 4676 scope.go:117] "RemoveContainer" containerID="17caedb43a800482408b315f9631ff9a96ef55fffba165823c14cc78155a7eef"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.661279 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6567888b4c-v7f69"]
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.665366 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6567888b4c-v7f69"]
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.742719 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-765455979b-gkrv4"]
Jan 24 00:10:08 crc kubenswrapper[4676]: E0124 00:10:08.743108 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9887557b-81eb-4651-8da2-fd34d7b0be97" containerName="registry"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.743140 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="9887557b-81eb-4651-8da2-fd34d7b0be97" containerName="registry"
Jan 24 00:10:08 crc kubenswrapper[4676]: E0124 00:10:08.743171 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3166691a-d300-4e14-a2ae-745fac50b3b4" containerName="controller-manager"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.743186 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="3166691a-d300-4e14-a2ae-745fac50b3b4" containerName="controller-manager"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.743362 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="3166691a-d300-4e14-a2ae-745fac50b3b4" containerName="controller-manager"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.743423 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="9887557b-81eb-4651-8da2-fd34d7b0be97" containerName="registry"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.744007 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-765455979b-gkrv4"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.746780 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.747044 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.747127 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.747281 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.747769 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.755183 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.758350 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-765455979b-gkrv4"]
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.775037 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.855350 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444c268d-a28d-4b7b-a1a3-6bbe0792653e-config\") pod \"controller-manager-765455979b-gkrv4\" (UID: \"444c268d-a28d-4b7b-a1a3-6bbe0792653e\") " pod="openshift-controller-manager/controller-manager-765455979b-gkrv4"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.855433 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/444c268d-a28d-4b7b-a1a3-6bbe0792653e-client-ca\") pod \"controller-manager-765455979b-gkrv4\" (UID: \"444c268d-a28d-4b7b-a1a3-6bbe0792653e\") " pod="openshift-controller-manager/controller-manager-765455979b-gkrv4"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.855469 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jkk4\" (UniqueName: \"kubernetes.io/projected/444c268d-a28d-4b7b-a1a3-6bbe0792653e-kube-api-access-8jkk4\") pod \"controller-manager-765455979b-gkrv4\" (UID: \"444c268d-a28d-4b7b-a1a3-6bbe0792653e\") " pod="openshift-controller-manager/controller-manager-765455979b-gkrv4"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.855495 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/444c268d-a28d-4b7b-a1a3-6bbe0792653e-serving-cert\") pod \"controller-manager-765455979b-gkrv4\" (UID: \"444c268d-a28d-4b7b-a1a3-6bbe0792653e\") " pod="openshift-controller-manager/controller-manager-765455979b-gkrv4"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.855529 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/444c268d-a28d-4b7b-a1a3-6bbe0792653e-proxy-ca-bundles\") pod \"controller-manager-765455979b-gkrv4\" (UID: \"444c268d-a28d-4b7b-a1a3-6bbe0792653e\") " pod="openshift-controller-manager/controller-manager-765455979b-gkrv4"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.957549 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444c268d-a28d-4b7b-a1a3-6bbe0792653e-config\") pod \"controller-manager-765455979b-gkrv4\" (UID: \"444c268d-a28d-4b7b-a1a3-6bbe0792653e\") " pod="openshift-controller-manager/controller-manager-765455979b-gkrv4"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.957743 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/444c268d-a28d-4b7b-a1a3-6bbe0792653e-client-ca\") pod \"controller-manager-765455979b-gkrv4\" (UID: \"444c268d-a28d-4b7b-a1a3-6bbe0792653e\") " pod="openshift-controller-manager/controller-manager-765455979b-gkrv4"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.957835 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jkk4\" (UniqueName: \"kubernetes.io/projected/444c268d-a28d-4b7b-a1a3-6bbe0792653e-kube-api-access-8jkk4\") pod \"controller-manager-765455979b-gkrv4\" (UID: \"444c268d-a28d-4b7b-a1a3-6bbe0792653e\") " pod="openshift-controller-manager/controller-manager-765455979b-gkrv4"
Jan 24 00:10:08 crc kubenswrapper[4676]: I0124
00:10:08.957896 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/444c268d-a28d-4b7b-a1a3-6bbe0792653e-serving-cert\") pod \"controller-manager-765455979b-gkrv4\" (UID: \"444c268d-a28d-4b7b-a1a3-6bbe0792653e\") " pod="openshift-controller-manager/controller-manager-765455979b-gkrv4" Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.957957 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/444c268d-a28d-4b7b-a1a3-6bbe0792653e-proxy-ca-bundles\") pod \"controller-manager-765455979b-gkrv4\" (UID: \"444c268d-a28d-4b7b-a1a3-6bbe0792653e\") " pod="openshift-controller-manager/controller-manager-765455979b-gkrv4" Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.959994 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/444c268d-a28d-4b7b-a1a3-6bbe0792653e-proxy-ca-bundles\") pod \"controller-manager-765455979b-gkrv4\" (UID: \"444c268d-a28d-4b7b-a1a3-6bbe0792653e\") " pod="openshift-controller-manager/controller-manager-765455979b-gkrv4" Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.960992 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444c268d-a28d-4b7b-a1a3-6bbe0792653e-config\") pod \"controller-manager-765455979b-gkrv4\" (UID: \"444c268d-a28d-4b7b-a1a3-6bbe0792653e\") " pod="openshift-controller-manager/controller-manager-765455979b-gkrv4" Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.962322 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/444c268d-a28d-4b7b-a1a3-6bbe0792653e-client-ca\") pod \"controller-manager-765455979b-gkrv4\" (UID: \"444c268d-a28d-4b7b-a1a3-6bbe0792653e\") " 
pod="openshift-controller-manager/controller-manager-765455979b-gkrv4" Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.966124 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/444c268d-a28d-4b7b-a1a3-6bbe0792653e-serving-cert\") pod \"controller-manager-765455979b-gkrv4\" (UID: \"444c268d-a28d-4b7b-a1a3-6bbe0792653e\") " pod="openshift-controller-manager/controller-manager-765455979b-gkrv4" Jan 24 00:10:08 crc kubenswrapper[4676]: I0124 00:10:08.990518 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jkk4\" (UniqueName: \"kubernetes.io/projected/444c268d-a28d-4b7b-a1a3-6bbe0792653e-kube-api-access-8jkk4\") pod \"controller-manager-765455979b-gkrv4\" (UID: \"444c268d-a28d-4b7b-a1a3-6bbe0792653e\") " pod="openshift-controller-manager/controller-manager-765455979b-gkrv4" Jan 24 00:10:09 crc kubenswrapper[4676]: I0124 00:10:09.077751 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-765455979b-gkrv4" Jan 24 00:10:09 crc kubenswrapper[4676]: I0124 00:10:09.364203 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:10:09 crc kubenswrapper[4676]: I0124 00:10:09.364580 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:10:09 crc kubenswrapper[4676]: I0124 00:10:09.551424 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-765455979b-gkrv4"] Jan 24 00:10:09 crc kubenswrapper[4676]: I0124 00:10:09.646173 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-765455979b-gkrv4" event={"ID":"444c268d-a28d-4b7b-a1a3-6bbe0792653e","Type":"ContainerStarted","Data":"699cf66899ea5853552f1ca6f0ef46e8661aad000af615b9c0e74305c336f797"} Jan 24 00:10:10 crc kubenswrapper[4676]: I0124 00:10:10.263633 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3166691a-d300-4e14-a2ae-745fac50b3b4" path="/var/lib/kubelet/pods/3166691a-d300-4e14-a2ae-745fac50b3b4/volumes" Jan 24 00:10:10 crc kubenswrapper[4676]: I0124 00:10:10.654469 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-765455979b-gkrv4" event={"ID":"444c268d-a28d-4b7b-a1a3-6bbe0792653e","Type":"ContainerStarted","Data":"967d2fd4b40dab65607f024038c708140924dc24afa407932443cf8620caadf3"} Jan 24 00:10:10 crc 
kubenswrapper[4676]: I0124 00:10:10.655206 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-765455979b-gkrv4" Jan 24 00:10:10 crc kubenswrapper[4676]: I0124 00:10:10.662277 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-765455979b-gkrv4" Jan 24 00:10:10 crc kubenswrapper[4676]: I0124 00:10:10.682178 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-765455979b-gkrv4" podStartSLOduration=3.682139464 podStartE2EDuration="3.682139464s" podCreationTimestamp="2026-01-24 00:10:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:10:10.67422713 +0000 UTC m=+394.704198141" watchObservedRunningTime="2026-01-24 00:10:10.682139464 +0000 UTC m=+394.712110515" Jan 24 00:10:39 crc kubenswrapper[4676]: I0124 00:10:39.364803 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:10:39 crc kubenswrapper[4676]: I0124 00:10:39.365456 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:10:39 crc kubenswrapper[4676]: I0124 00:10:39.365527 4676 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:10:39 crc kubenswrapper[4676]: I0124 
00:10:39.366304 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a2899a2bd48b07d11a54988ae27807805689a40bb41df1e769ca4da39298f4c"} pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 00:10:39 crc kubenswrapper[4676]: I0124 00:10:39.366420 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" containerID="cri-o://6a2899a2bd48b07d11a54988ae27807805689a40bb41df1e769ca4da39298f4c" gracePeriod=600 Jan 24 00:10:39 crc kubenswrapper[4676]: I0124 00:10:39.848808 4676 generic.go:334] "Generic (PLEG): container finished" podID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerID="6a2899a2bd48b07d11a54988ae27807805689a40bb41df1e769ca4da39298f4c" exitCode=0 Jan 24 00:10:39 crc kubenswrapper[4676]: I0124 00:10:39.848928 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerDied","Data":"6a2899a2bd48b07d11a54988ae27807805689a40bb41df1e769ca4da39298f4c"} Jan 24 00:10:39 crc kubenswrapper[4676]: I0124 00:10:39.849225 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"51936651fd255c79ecefd9c199d1a0336083a72c956e86d582874060fc907470"} Jan 24 00:10:39 crc kubenswrapper[4676]: I0124 00:10:39.849254 4676 scope.go:117] "RemoveContainer" containerID="9bf2fa5fb75b902d85e89d6ce3189bb1074a855a03752ec7f4fd03195945544d" Jan 24 00:12:39 crc kubenswrapper[4676]: I0124 00:12:39.364298 4676 patch_prober.go:28] interesting 
pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:12:39 crc kubenswrapper[4676]: I0124 00:12:39.365595 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:13:09 crc kubenswrapper[4676]: I0124 00:13:09.365178 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:13:09 crc kubenswrapper[4676]: I0124 00:13:09.365842 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:13:39 crc kubenswrapper[4676]: I0124 00:13:39.364593 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:13:39 crc kubenswrapper[4676]: I0124 00:13:39.365244 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:13:39 crc kubenswrapper[4676]: I0124 00:13:39.365308 4676 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:13:39 crc kubenswrapper[4676]: I0124 00:13:39.366344 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51936651fd255c79ecefd9c199d1a0336083a72c956e86d582874060fc907470"} pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 00:13:39 crc kubenswrapper[4676]: I0124 00:13:39.366632 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" containerID="cri-o://51936651fd255c79ecefd9c199d1a0336083a72c956e86d582874060fc907470" gracePeriod=600 Jan 24 00:13:40 crc kubenswrapper[4676]: I0124 00:13:40.165962 4676 generic.go:334] "Generic (PLEG): container finished" podID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerID="51936651fd255c79ecefd9c199d1a0336083a72c956e86d582874060fc907470" exitCode=0 Jan 24 00:13:40 crc kubenswrapper[4676]: I0124 00:13:40.166503 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerDied","Data":"51936651fd255c79ecefd9c199d1a0336083a72c956e86d582874060fc907470"} Jan 24 00:13:40 crc kubenswrapper[4676]: I0124 00:13:40.166535 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"daf4dfd81dc7faee8c5a37cce872ffde5731f2d91708788dd42d2993fec18ba6"} Jan 24 00:13:40 crc kubenswrapper[4676]: I0124 00:13:40.166566 4676 scope.go:117] "RemoveContainer" containerID="6a2899a2bd48b07d11a54988ae27807805689a40bb41df1e769ca4da39298f4c" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.632459 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-s27fj"] Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.634196 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s27fj" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.637499 4676 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-nt8rp" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.637900 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.638181 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.666051 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-s27fj"] Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.674340 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-jws6v"] Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.675039 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jws6v" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.678663 4676 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lzhxd" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.690216 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-jws6v"] Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.697711 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-w7kb7"] Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.698321 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-w7kb7" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.705881 4676 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-4p4cg" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.708311 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-w7kb7"] Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.755084 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckbvq\" (UniqueName: \"kubernetes.io/projected/1b51f181-5a85-4c07-b259-f67d17bf1134-kube-api-access-ckbvq\") pod \"cert-manager-858654f9db-jws6v\" (UID: \"1b51f181-5a85-4c07-b259-f67d17bf1134\") " pod="cert-manager/cert-manager-858654f9db-jws6v" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.755163 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n5fk\" (UniqueName: \"kubernetes.io/projected/6db681a1-6165-4404-81a9-a189e9b30bfd-kube-api-access-5n5fk\") pod \"cert-manager-cainjector-cf98fcc89-s27fj\" (UID: \"6db681a1-6165-4404-81a9-a189e9b30bfd\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-s27fj" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.856560 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n5fk\" (UniqueName: \"kubernetes.io/projected/6db681a1-6165-4404-81a9-a189e9b30bfd-kube-api-access-5n5fk\") pod \"cert-manager-cainjector-cf98fcc89-s27fj\" (UID: \"6db681a1-6165-4404-81a9-a189e9b30bfd\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-s27fj" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.856689 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckbvq\" (UniqueName: \"kubernetes.io/projected/1b51f181-5a85-4c07-b259-f67d17bf1134-kube-api-access-ckbvq\") pod \"cert-manager-858654f9db-jws6v\" (UID: \"1b51f181-5a85-4c07-b259-f67d17bf1134\") " pod="cert-manager/cert-manager-858654f9db-jws6v" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.856826 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk42x\" (UniqueName: \"kubernetes.io/projected/8f43788b-d559-4623-ae87-a820a2f23b08-kube-api-access-qk42x\") pod \"cert-manager-webhook-687f57d79b-w7kb7\" (UID: \"8f43788b-d559-4623-ae87-a820a2f23b08\") " pod="cert-manager/cert-manager-webhook-687f57d79b-w7kb7" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.878231 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckbvq\" (UniqueName: \"kubernetes.io/projected/1b51f181-5a85-4c07-b259-f67d17bf1134-kube-api-access-ckbvq\") pod \"cert-manager-858654f9db-jws6v\" (UID: \"1b51f181-5a85-4c07-b259-f67d17bf1134\") " pod="cert-manager/cert-manager-858654f9db-jws6v" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.880172 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n5fk\" (UniqueName: 
\"kubernetes.io/projected/6db681a1-6165-4404-81a9-a189e9b30bfd-kube-api-access-5n5fk\") pod \"cert-manager-cainjector-cf98fcc89-s27fj\" (UID: \"6db681a1-6165-4404-81a9-a189e9b30bfd\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-s27fj" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.958319 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk42x\" (UniqueName: \"kubernetes.io/projected/8f43788b-d559-4623-ae87-a820a2f23b08-kube-api-access-qk42x\") pod \"cert-manager-webhook-687f57d79b-w7kb7\" (UID: \"8f43788b-d559-4623-ae87-a820a2f23b08\") " pod="cert-manager/cert-manager-webhook-687f57d79b-w7kb7" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.966169 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s27fj" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.974712 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk42x\" (UniqueName: \"kubernetes.io/projected/8f43788b-d559-4623-ae87-a820a2f23b08-kube-api-access-qk42x\") pod \"cert-manager-webhook-687f57d79b-w7kb7\" (UID: \"8f43788b-d559-4623-ae87-a820a2f23b08\") " pod="cert-manager/cert-manager-webhook-687f57d79b-w7kb7" Jan 24 00:14:02 crc kubenswrapper[4676]: I0124 00:14:02.997666 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jws6v" Jan 24 00:14:03 crc kubenswrapper[4676]: I0124 00:14:03.013987 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-w7kb7" Jan 24 00:14:03 crc kubenswrapper[4676]: I0124 00:14:03.226547 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-s27fj"] Jan 24 00:14:03 crc kubenswrapper[4676]: I0124 00:14:03.240917 4676 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 00:14:03 crc kubenswrapper[4676]: I0124 00:14:03.250226 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-jws6v"] Jan 24 00:14:03 crc kubenswrapper[4676]: W0124 00:14:03.297216 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f43788b_d559_4623_ae87_a820a2f23b08.slice/crio-a36ce2c976d9ee609443c3d2f4e362184a38e380dfc12f6bcfac87fa11a03815 WatchSource:0}: Error finding container a36ce2c976d9ee609443c3d2f4e362184a38e380dfc12f6bcfac87fa11a03815: Status 404 returned error can't find the container with id a36ce2c976d9ee609443c3d2f4e362184a38e380dfc12f6bcfac87fa11a03815 Jan 24 00:14:03 crc kubenswrapper[4676]: I0124 00:14:03.297472 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-w7kb7"] Jan 24 00:14:03 crc kubenswrapper[4676]: I0124 00:14:03.315876 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-w7kb7" event={"ID":"8f43788b-d559-4623-ae87-a820a2f23b08","Type":"ContainerStarted","Data":"a36ce2c976d9ee609443c3d2f4e362184a38e380dfc12f6bcfac87fa11a03815"} Jan 24 00:14:03 crc kubenswrapper[4676]: I0124 00:14:03.320726 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s27fj" event={"ID":"6db681a1-6165-4404-81a9-a189e9b30bfd","Type":"ContainerStarted","Data":"dab150e50614a7fcfffb7fde04ee0c5eb4c3aa33cf6329ff198bf4b3cd33a5e0"} Jan 24 00:14:03 crc 
kubenswrapper[4676]: I0124 00:14:03.321802 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-jws6v" event={"ID":"1b51f181-5a85-4c07-b259-f67d17bf1134","Type":"ContainerStarted","Data":"6e36bcc528299e3c444b1a5b51b721248634e9711d864189eb9cafb21be8688d"} Jan 24 00:14:08 crc kubenswrapper[4676]: I0124 00:14:08.369072 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-w7kb7" event={"ID":"8f43788b-d559-4623-ae87-a820a2f23b08","Type":"ContainerStarted","Data":"01173b1b3bc11b38321967d4f8e929aa6578c9495d169dccbf72bc4d4657304d"} Jan 24 00:14:08 crc kubenswrapper[4676]: I0124 00:14:08.370760 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-w7kb7" Jan 24 00:14:08 crc kubenswrapper[4676]: I0124 00:14:08.372472 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s27fj" event={"ID":"6db681a1-6165-4404-81a9-a189e9b30bfd","Type":"ContainerStarted","Data":"53d6adf12612b975323d2b343cc3fc52b20c87621e4243827c8eb1ead8c1cab5"} Jan 24 00:14:08 crc kubenswrapper[4676]: I0124 00:14:08.375947 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-jws6v" event={"ID":"1b51f181-5a85-4c07-b259-f67d17bf1134","Type":"ContainerStarted","Data":"cc992135d2b376d78c382c95906e57272a00be1e59b87b4c440d6a4a84d13153"} Jan 24 00:14:08 crc kubenswrapper[4676]: I0124 00:14:08.398675 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-w7kb7" podStartSLOduration=2.349557902 podStartE2EDuration="6.398631643s" podCreationTimestamp="2026-01-24 00:14:02 +0000 UTC" firstStartedPulling="2026-01-24 00:14:03.300043045 +0000 UTC m=+627.330014046" lastFinishedPulling="2026-01-24 00:14:07.349116796 +0000 UTC m=+631.379087787" observedRunningTime="2026-01-24 00:14:08.392835946 +0000 UTC 
m=+632.422806947" watchObservedRunningTime="2026-01-24 00:14:08.398631643 +0000 UTC m=+632.428602694" Jan 24 00:14:08 crc kubenswrapper[4676]: I0124 00:14:08.425335 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s27fj" podStartSLOduration=3.864665225 podStartE2EDuration="6.425299935s" podCreationTimestamp="2026-01-24 00:14:02 +0000 UTC" firstStartedPulling="2026-01-24 00:14:03.240510361 +0000 UTC m=+627.270481362" lastFinishedPulling="2026-01-24 00:14:05.801145031 +0000 UTC m=+629.831116072" observedRunningTime="2026-01-24 00:14:08.424158691 +0000 UTC m=+632.454129692" watchObservedRunningTime="2026-01-24 00:14:08.425299935 +0000 UTC m=+632.455270976" Jan 24 00:14:08 crc kubenswrapper[4676]: I0124 00:14:08.469914 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-jws6v" podStartSLOduration=2.384312211 podStartE2EDuration="6.469877614s" podCreationTimestamp="2026-01-24 00:14:02 +0000 UTC" firstStartedPulling="2026-01-24 00:14:03.264361988 +0000 UTC m=+627.294332989" lastFinishedPulling="2026-01-24 00:14:07.349927391 +0000 UTC m=+631.379898392" observedRunningTime="2026-01-24 00:14:08.449996598 +0000 UTC m=+632.479967599" watchObservedRunningTime="2026-01-24 00:14:08.469877614 +0000 UTC m=+632.499848625" Jan 24 00:14:12 crc kubenswrapper[4676]: I0124 00:14:12.544152 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ld569"] Jan 24 00:14:12 crc kubenswrapper[4676]: I0124 00:14:12.545368 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovn-controller" containerID="cri-o://a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468" gracePeriod=30 Jan 24 00:14:12 crc kubenswrapper[4676]: I0124 00:14:12.545483 4676 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="nbdb" containerID="cri-o://0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58" gracePeriod=30 Jan 24 00:14:12 crc kubenswrapper[4676]: I0124 00:14:12.545552 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="sbdb" containerID="cri-o://a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624" gracePeriod=30 Jan 24 00:14:12 crc kubenswrapper[4676]: I0124 00:14:12.545622 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7" gracePeriod=30 Jan 24 00:14:12 crc kubenswrapper[4676]: I0124 00:14:12.545683 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="kube-rbac-proxy-node" containerID="cri-o://02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc" gracePeriod=30 Jan 24 00:14:12 crc kubenswrapper[4676]: I0124 00:14:12.545743 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovn-acl-logging" containerID="cri-o://3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd" gracePeriod=30 Jan 24 00:14:12 crc kubenswrapper[4676]: I0124 00:14:12.545599 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="northd" 
containerID="cri-o://d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb" gracePeriod=30 Jan 24 00:14:12 crc kubenswrapper[4676]: I0124 00:14:12.604818 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" containerID="cri-o://b097984a7c57abb09d2fc1362781982b54bf19474644ee1742c87013905c7faf" gracePeriod=30 Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.017566 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-w7kb7" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.412846 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x57xf_b88e9d2e-35da-45a8-ac7e-22afd660ff9f/kube-multus/2.log" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.414172 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x57xf_b88e9d2e-35da-45a8-ac7e-22afd660ff9f/kube-multus/1.log" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.414220 4676 generic.go:334] "Generic (PLEG): container finished" podID="b88e9d2e-35da-45a8-ac7e-22afd660ff9f" containerID="503448e193566525ada0f32c12c8a2978a0f18fbc763208a99e7e6534727cec5" exitCode=2 Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.414299 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x57xf" event={"ID":"b88e9d2e-35da-45a8-ac7e-22afd660ff9f","Type":"ContainerDied","Data":"503448e193566525ada0f32c12c8a2978a0f18fbc763208a99e7e6534727cec5"} Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.414424 4676 scope.go:117] "RemoveContainer" containerID="cf92889c765992ceabf09d2de008fbbbfc1dc097012d57ce03aafee751eb759b" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.414803 4676 scope.go:117] "RemoveContainer" 
containerID="503448e193566525ada0f32c12c8a2978a0f18fbc763208a99e7e6534727cec5" Jan 24 00:14:13 crc kubenswrapper[4676]: E0124 00:14:13.414997 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-x57xf_openshift-multus(b88e9d2e-35da-45a8-ac7e-22afd660ff9f)\"" pod="openshift-multus/multus-x57xf" podUID="b88e9d2e-35da-45a8-ac7e-22afd660ff9f" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.419670 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/3.log" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.426604 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovn-acl-logging/0.log" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.427234 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovn-controller/0.log" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.427778 4676 generic.go:334] "Generic (PLEG): container finished" podID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerID="b097984a7c57abb09d2fc1362781982b54bf19474644ee1742c87013905c7faf" exitCode=0 Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.427817 4676 generic.go:334] "Generic (PLEG): container finished" podID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerID="a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624" exitCode=0 Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.427835 4676 generic.go:334] "Generic (PLEG): container finished" podID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerID="0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58" exitCode=0 Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 
00:14:13.427852 4676 generic.go:334] "Generic (PLEG): container finished" podID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerID="d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb" exitCode=0 Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.427851 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerDied","Data":"b097984a7c57abb09d2fc1362781982b54bf19474644ee1742c87013905c7faf"} Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.427909 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerDied","Data":"a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624"} Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.427928 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerDied","Data":"0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58"} Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.427945 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerDied","Data":"d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb"} Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.427968 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerDied","Data":"97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7"} Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.427867 4676 generic.go:334] "Generic (PLEG): container finished" podID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" 
containerID="97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7" exitCode=0 Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.428002 4676 generic.go:334] "Generic (PLEG): container finished" podID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerID="02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc" exitCode=0 Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.428022 4676 generic.go:334] "Generic (PLEG): container finished" podID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerID="3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd" exitCode=143 Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.428033 4676 generic.go:334] "Generic (PLEG): container finished" podID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerID="a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468" exitCode=143 Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.428054 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerDied","Data":"02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc"} Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.428068 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerDied","Data":"3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd"} Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.428082 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerDied","Data":"a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468"} Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.439680 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovnkube-controller/3.log" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.444566 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovn-acl-logging/0.log" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.445904 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovn-controller/0.log" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.446548 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.485890 4676 scope.go:117] "RemoveContainer" containerID="dc41985bef5146e5e21b5354222ebacb6310fa940511f64524b296d99bbd73e9" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534266 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c2dqg"] Jan 24 00:14:13 crc kubenswrapper[4676]: E0124 00:14:13.534612 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovn-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534630 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovn-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: E0124 00:14:13.534642 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534649 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: E0124 00:14:13.534659 4676 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovn-acl-logging" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534667 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovn-acl-logging" Jan 24 00:14:13 crc kubenswrapper[4676]: E0124 00:14:13.534677 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534683 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: E0124 00:14:13.534694 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534700 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: E0124 00:14:13.534717 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="kubecfg-setup" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534725 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="kubecfg-setup" Jan 24 00:14:13 crc kubenswrapper[4676]: E0124 00:14:13.534733 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="sbdb" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534741 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="sbdb" Jan 24 00:14:13 crc kubenswrapper[4676]: E0124 00:14:13.534754 4676 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="kube-rbac-proxy-ovn-metrics" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534762 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="kube-rbac-proxy-ovn-metrics" Jan 24 00:14:13 crc kubenswrapper[4676]: E0124 00:14:13.534770 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="nbdb" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534776 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="nbdb" Jan 24 00:14:13 crc kubenswrapper[4676]: E0124 00:14:13.534787 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="kube-rbac-proxy-node" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534794 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="kube-rbac-proxy-node" Jan 24 00:14:13 crc kubenswrapper[4676]: E0124 00:14:13.534802 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="northd" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534807 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="northd" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534955 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="sbdb" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534972 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="northd" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534988 4676 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="kube-rbac-proxy-ovn-metrics" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.534997 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.535006 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.535017 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.535030 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovn-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.535039 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="kube-rbac-proxy-node" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.535048 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.535058 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovn-acl-logging" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.535068 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.535080 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="nbdb" Jan 24 00:14:13 crc kubenswrapper[4676]: E0124 00:14:13.535214 4676 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.535224 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: E0124 00:14:13.535234 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.535240 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" containerName="ovnkube-controller" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.537421 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.624971 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625024 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-openvswitch\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625053 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-kubelet\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: 
\"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625089 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4frqp\" (UniqueName: \"kubernetes.io/projected/24f0dc26-0857-430f-aebd-073fcfcc1c0a-kube-api-access-4frqp\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625126 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-ovn\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625141 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-log-socket\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625177 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovnkube-script-lib\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625152 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625198 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-var-lib-openvswitch\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625215 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-slash\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625241 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-cni-bin\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625212 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625274 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625285 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovn-node-metrics-cert\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625246 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625313 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-systemd\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625334 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-etc-openvswitch\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625344 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625364 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-run-netns\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625403 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-node-log\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625432 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-env-overrides\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625456 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-run-ovn-kubernetes\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625477 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-systemd-units\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625505 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-cni-netd\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625538 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovnkube-config\") pod \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\" (UID: \"24f0dc26-0857-430f-aebd-073fcfcc1c0a\") " Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625675 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-run-systemd\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625714 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-slash\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625733 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-run-openvswitch\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625757 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-cni-netd\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625773 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpx9m\" (UniqueName: \"kubernetes.io/projected/b082801b-819b-4d2d-835b-38a93b997fce-kube-api-access-hpx9m\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625791 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-kubelet\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625810 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625829 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-systemd-units\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625851 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b082801b-819b-4d2d-835b-38a93b997fce-ovnkube-config\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625875 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-run-ovn\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625905 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-run-ovn-kubernetes\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625926 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-cni-bin\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625941 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b082801b-819b-4d2d-835b-38a93b997fce-env-overrides\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625959 4676 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-etc-openvswitch\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625984 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b082801b-819b-4d2d-835b-38a93b997fce-ovn-node-metrics-cert\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626002 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-var-lib-openvswitch\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626029 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-node-log\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626051 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b082801b-819b-4d2d-835b-38a93b997fce-ovnkube-script-lib\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc 
kubenswrapper[4676]: I0124 00:14:13.626073 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-run-netns\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626092 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-log-socket\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626129 4676 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626140 4676 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626152 4676 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626164 4676 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626176 4676 reconciler_common.go:293] "Volume detached 
for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.625402 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-log-socket" (OuterVolumeSpecName: "log-socket") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626137 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626191 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-slash" (OuterVolumeSpecName: "host-slash") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626214 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626653 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626669 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626698 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626718 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626808 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626844 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.626905 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-node-log" (OuterVolumeSpecName: "node-log") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.627087 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.632944 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24f0dc26-0857-430f-aebd-073fcfcc1c0a-kube-api-access-4frqp" (OuterVolumeSpecName: "kube-api-access-4frqp") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "kube-api-access-4frqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.634163 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.641624 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "24f0dc26-0857-430f-aebd-073fcfcc1c0a" (UID: "24f0dc26-0857-430f-aebd-073fcfcc1c0a"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727318 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-cni-netd\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727474 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-cni-netd\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727493 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-run-openvswitch\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727556 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpx9m\" (UniqueName: \"kubernetes.io/projected/b082801b-819b-4d2d-835b-38a93b997fce-kube-api-access-hpx9m\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727579 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-kubelet\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc 
kubenswrapper[4676]: I0124 00:14:13.727585 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-run-openvswitch\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727606 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727633 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-systemd-units\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727650 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-kubelet\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727670 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b082801b-819b-4d2d-835b-38a93b997fce-ovnkube-config\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727703 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727720 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-run-ovn\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727745 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-run-ovn-kubernetes\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727753 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-systemd-units\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727776 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-cni-bin\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727794 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-etc-openvswitch\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727800 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-run-ovn\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727814 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b082801b-819b-4d2d-835b-38a93b997fce-env-overrides\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727853 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b082801b-819b-4d2d-835b-38a93b997fce-ovn-node-metrics-cert\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727878 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-var-lib-openvswitch\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727947 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-node-log\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.727998 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b082801b-819b-4d2d-835b-38a93b997fce-ovnkube-script-lib\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728034 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-run-netns\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728062 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-log-socket\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728095 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-run-systemd\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728141 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-slash\") pod 
\"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728199 4676 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728211 4676 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728221 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4frqp\" (UniqueName: \"kubernetes.io/projected/24f0dc26-0857-430f-aebd-073fcfcc1c0a-kube-api-access-4frqp\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728231 4676 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-log-socket\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728240 4676 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728249 4676 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-slash\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728256 4676 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-cni-bin\") on node \"crc\" DevicePath \"\"" 
Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728268 4676 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/24f0dc26-0857-430f-aebd-073fcfcc1c0a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728276 4676 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728285 4676 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728294 4676 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728302 4676 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-node-log\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728311 4676 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/24f0dc26-0857-430f-aebd-073fcfcc1c0a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728320 4676 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728330 4676 reconciler_common.go:293] "Volume detached 
for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/24f0dc26-0857-430f-aebd-073fcfcc1c0a-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728356 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-slash\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728418 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-run-ovn-kubernetes\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728449 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-cni-bin\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728476 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-etc-openvswitch\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728592 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-var-lib-openvswitch\") pod \"ovnkube-node-c2dqg\" (UID: 
\"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728796 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-log-socket\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728876 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-host-run-netns\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728917 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-run-systemd\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.728934 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b082801b-819b-4d2d-835b-38a93b997fce-node-log\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.729021 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b082801b-819b-4d2d-835b-38a93b997fce-ovnkube-config\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 
00:14:13.729098 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b082801b-819b-4d2d-835b-38a93b997fce-env-overrides\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.729418 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b082801b-819b-4d2d-835b-38a93b997fce-ovnkube-script-lib\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.734122 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b082801b-819b-4d2d-835b-38a93b997fce-ovn-node-metrics-cert\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.751724 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpx9m\" (UniqueName: \"kubernetes.io/projected/b082801b-819b-4d2d-835b-38a93b997fce-kube-api-access-hpx9m\") pod \"ovnkube-node-c2dqg\" (UID: \"b082801b-819b-4d2d-835b-38a93b997fce\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:13 crc kubenswrapper[4676]: I0124 00:14:13.854359 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.439984 4676 generic.go:334] "Generic (PLEG): container finished" podID="b082801b-819b-4d2d-835b-38a93b997fce" containerID="4fed15ba15b6e85c16ab1b2784cfd203e0efc654b51c36133af8d2ef84d4becb" exitCode=0 Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.440414 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" event={"ID":"b082801b-819b-4d2d-835b-38a93b997fce","Type":"ContainerDied","Data":"4fed15ba15b6e85c16ab1b2784cfd203e0efc654b51c36133af8d2ef84d4becb"} Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.440480 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" event={"ID":"b082801b-819b-4d2d-835b-38a93b997fce","Type":"ContainerStarted","Data":"daa79d40639e60df9f72aa02df9f92b56c6e10ba2812c60726eaf037482092c7"} Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.446215 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x57xf_b88e9d2e-35da-45a8-ac7e-22afd660ff9f/kube-multus/2.log" Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.454354 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovn-acl-logging/0.log" Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.455198 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ld569_24f0dc26-0857-430f-aebd-073fcfcc1c0a/ovn-controller/0.log" Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.456061 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" event={"ID":"24f0dc26-0857-430f-aebd-073fcfcc1c0a","Type":"ContainerDied","Data":"a4ecac14869f9e61b6cf328f1a06fe9b463abccd1352f43143e709699b48fbd7"} Jan 24 00:14:14 crc kubenswrapper[4676]: 
I0124 00:14:14.456126 4676 scope.go:117] "RemoveContainer" containerID="b097984a7c57abb09d2fc1362781982b54bf19474644ee1742c87013905c7faf" Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.456363 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ld569" Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.485129 4676 scope.go:117] "RemoveContainer" containerID="a1ccf0f4689bd5b3f634716a02e2c504e9cf4a1ebf5d95d06e1726133f4b2624" Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.523351 4676 scope.go:117] "RemoveContainer" containerID="0c11b0bf64a540088ea316e492b601758f34a1a11e78622c5c084804b7213c58" Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.569880 4676 scope.go:117] "RemoveContainer" containerID="d5580f29ee3a76e3eb08133c85be9d6a05b2738b900cd45b31c4fff775dab9bb" Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.580191 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ld569"] Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.584449 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ld569"] Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.590123 4676 scope.go:117] "RemoveContainer" containerID="97d78e2b53d638374e7271129c05aa5b21f56dbc2abd4213f314f5a9220ad3c7" Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.608019 4676 scope.go:117] "RemoveContainer" containerID="02d161fdcbf3861e821058380babbbf7ebb6a5929199df6285c046bed8d4d9cc" Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.625106 4676 scope.go:117] "RemoveContainer" containerID="3878ebc66d4bc240cb14e18ed0dd1d1a06e65b3340a42aa0a54d70b5225422dd" Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.651978 4676 scope.go:117] "RemoveContainer" containerID="a3d45db1c4c5e94ab3d5c20fc015bf49cce8f4306d7a56bead500d7ea13bb468" Jan 24 00:14:14 crc kubenswrapper[4676]: I0124 00:14:14.671678 4676 
scope.go:117] "RemoveContainer" containerID="c59614eb0966d467422d52077d3fcb569d7c66e7b1ce142a7c2b3a548c315551" Jan 24 00:14:15 crc kubenswrapper[4676]: I0124 00:14:15.466733 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" event={"ID":"b082801b-819b-4d2d-835b-38a93b997fce","Type":"ContainerStarted","Data":"4bbb3007fa5db6b08b3a2e059bbdcabd98a314f78f1b8bd2145c9c8679a3c131"} Jan 24 00:14:15 crc kubenswrapper[4676]: I0124 00:14:15.467218 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" event={"ID":"b082801b-819b-4d2d-835b-38a93b997fce","Type":"ContainerStarted","Data":"0019b481bcecfa427cbcbd8601b9929cba76fc95252a31658dc859a343bf16f1"} Jan 24 00:14:15 crc kubenswrapper[4676]: I0124 00:14:15.467233 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" event={"ID":"b082801b-819b-4d2d-835b-38a93b997fce","Type":"ContainerStarted","Data":"31df2e60c29211f17eb1cb0a277d02f598cb618a745a4067410dd3d8a32b0283"} Jan 24 00:14:15 crc kubenswrapper[4676]: I0124 00:14:15.467244 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" event={"ID":"b082801b-819b-4d2d-835b-38a93b997fce","Type":"ContainerStarted","Data":"e8c2615fd559f2b8bbe3f170b11362efef09bf752021d8c0c06f5930919dcea9"} Jan 24 00:14:15 crc kubenswrapper[4676]: I0124 00:14:15.467305 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" event={"ID":"b082801b-819b-4d2d-835b-38a93b997fce","Type":"ContainerStarted","Data":"57ca41b08691a5e2d17bac2255d196d1a86559a93b43a71f6fc2d04a24f8fade"} Jan 24 00:14:15 crc kubenswrapper[4676]: I0124 00:14:15.467316 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" 
event={"ID":"b082801b-819b-4d2d-835b-38a93b997fce","Type":"ContainerStarted","Data":"6088a3bd55a04f084e276574e48ff5ad3268ac83bd6ff1b2e3eead89794d8df0"} Jan 24 00:14:16 crc kubenswrapper[4676]: I0124 00:14:16.263533 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24f0dc26-0857-430f-aebd-073fcfcc1c0a" path="/var/lib/kubelet/pods/24f0dc26-0857-430f-aebd-073fcfcc1c0a/volumes" Jan 24 00:14:18 crc kubenswrapper[4676]: I0124 00:14:18.494519 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" event={"ID":"b082801b-819b-4d2d-835b-38a93b997fce","Type":"ContainerStarted","Data":"92fd32939e1ff79efddb46e5b0b57cbd61f5e788a9337e2c5b375b985959b6a3"} Jan 24 00:14:20 crc kubenswrapper[4676]: I0124 00:14:20.511198 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" event={"ID":"b082801b-819b-4d2d-835b-38a93b997fce","Type":"ContainerStarted","Data":"d408ca0bb7091ab554b81d1909a3d5528818046f1e5197886616b8b1040f98f8"} Jan 24 00:14:20 crc kubenswrapper[4676]: I0124 00:14:20.511812 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:20 crc kubenswrapper[4676]: I0124 00:14:20.511946 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:20 crc kubenswrapper[4676]: I0124 00:14:20.555916 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:20 crc kubenswrapper[4676]: I0124 00:14:20.613128 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" podStartSLOduration=7.613110871 podStartE2EDuration="7.613110871s" podCreationTimestamp="2026-01-24 00:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-24 00:14:20.569306407 +0000 UTC m=+644.599277418" watchObservedRunningTime="2026-01-24 00:14:20.613110871 +0000 UTC m=+644.643081882" Jan 24 00:14:21 crc kubenswrapper[4676]: I0124 00:14:21.520703 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:21 crc kubenswrapper[4676]: I0124 00:14:21.566507 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:26 crc kubenswrapper[4676]: I0124 00:14:26.260538 4676 scope.go:117] "RemoveContainer" containerID="503448e193566525ada0f32c12c8a2978a0f18fbc763208a99e7e6534727cec5" Jan 24 00:14:26 crc kubenswrapper[4676]: E0124 00:14:26.261665 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-x57xf_openshift-multus(b88e9d2e-35da-45a8-ac7e-22afd660ff9f)\"" pod="openshift-multus/multus-x57xf" podUID="b88e9d2e-35da-45a8-ac7e-22afd660ff9f" Jan 24 00:14:39 crc kubenswrapper[4676]: I0124 00:14:39.255642 4676 scope.go:117] "RemoveContainer" containerID="503448e193566525ada0f32c12c8a2978a0f18fbc763208a99e7e6534727cec5" Jan 24 00:14:39 crc kubenswrapper[4676]: I0124 00:14:39.657742 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x57xf_b88e9d2e-35da-45a8-ac7e-22afd660ff9f/kube-multus/2.log" Jan 24 00:14:39 crc kubenswrapper[4676]: I0124 00:14:39.657809 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x57xf" event={"ID":"b88e9d2e-35da-45a8-ac7e-22afd660ff9f","Type":"ContainerStarted","Data":"088ce398c4ed98e98c167dcb59af037593455aada9daa8c18b5830c108862309"} Jan 24 00:14:43 crc kubenswrapper[4676]: I0124 00:14:43.887506 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-c2dqg" Jan 24 00:14:54 crc kubenswrapper[4676]: I0124 00:14:54.466309 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv"] Jan 24 00:14:54 crc kubenswrapper[4676]: I0124 00:14:54.467842 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" Jan 24 00:14:54 crc kubenswrapper[4676]: I0124 00:14:54.469464 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 24 00:14:54 crc kubenswrapper[4676]: I0124 00:14:54.477857 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv"] Jan 24 00:14:54 crc kubenswrapper[4676]: I0124 00:14:54.600152 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79811b63-a3e6-47b0-8041-247b1536ba50-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv\" (UID: \"79811b63-a3e6-47b0-8041-247b1536ba50\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" Jan 24 00:14:54 crc kubenswrapper[4676]: I0124 00:14:54.600196 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79811b63-a3e6-47b0-8041-247b1536ba50-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv\" (UID: \"79811b63-a3e6-47b0-8041-247b1536ba50\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" Jan 24 00:14:54 crc kubenswrapper[4676]: I0124 00:14:54.600257 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vdrc2\" (UniqueName: \"kubernetes.io/projected/79811b63-a3e6-47b0-8041-247b1536ba50-kube-api-access-vdrc2\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv\" (UID: \"79811b63-a3e6-47b0-8041-247b1536ba50\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" Jan 24 00:14:54 crc kubenswrapper[4676]: I0124 00:14:54.701415 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79811b63-a3e6-47b0-8041-247b1536ba50-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv\" (UID: \"79811b63-a3e6-47b0-8041-247b1536ba50\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" Jan 24 00:14:54 crc kubenswrapper[4676]: I0124 00:14:54.701810 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79811b63-a3e6-47b0-8041-247b1536ba50-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv\" (UID: \"79811b63-a3e6-47b0-8041-247b1536ba50\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" Jan 24 00:14:54 crc kubenswrapper[4676]: I0124 00:14:54.702042 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79811b63-a3e6-47b0-8041-247b1536ba50-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv\" (UID: \"79811b63-a3e6-47b0-8041-247b1536ba50\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" Jan 24 00:14:54 crc kubenswrapper[4676]: I0124 00:14:54.702288 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdrc2\" (UniqueName: \"kubernetes.io/projected/79811b63-a3e6-47b0-8041-247b1536ba50-kube-api-access-vdrc2\") pod 
\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv\" (UID: \"79811b63-a3e6-47b0-8041-247b1536ba50\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" Jan 24 00:14:54 crc kubenswrapper[4676]: I0124 00:14:54.702544 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79811b63-a3e6-47b0-8041-247b1536ba50-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv\" (UID: \"79811b63-a3e6-47b0-8041-247b1536ba50\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" Jan 24 00:14:54 crc kubenswrapper[4676]: I0124 00:14:54.728085 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdrc2\" (UniqueName: \"kubernetes.io/projected/79811b63-a3e6-47b0-8041-247b1536ba50-kube-api-access-vdrc2\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv\" (UID: \"79811b63-a3e6-47b0-8041-247b1536ba50\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" Jan 24 00:14:54 crc kubenswrapper[4676]: I0124 00:14:54.783952 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" Jan 24 00:14:55 crc kubenswrapper[4676]: I0124 00:14:55.023158 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv"] Jan 24 00:14:55 crc kubenswrapper[4676]: I0124 00:14:55.763668 4676 generic.go:334] "Generic (PLEG): container finished" podID="79811b63-a3e6-47b0-8041-247b1536ba50" containerID="854c4496f04919c2bab64d3d7604ac339ddfc6ba556e38b6d39a05208f375eb4" exitCode=0 Jan 24 00:14:55 crc kubenswrapper[4676]: I0124 00:14:55.763754 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" event={"ID":"79811b63-a3e6-47b0-8041-247b1536ba50","Type":"ContainerDied","Data":"854c4496f04919c2bab64d3d7604ac339ddfc6ba556e38b6d39a05208f375eb4"} Jan 24 00:14:55 crc kubenswrapper[4676]: I0124 00:14:55.764208 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" event={"ID":"79811b63-a3e6-47b0-8041-247b1536ba50","Type":"ContainerStarted","Data":"2f8fc73d1311811b964eaef9ece132796c7141b06fe05cb4addc6171b93ccaed"} Jan 24 00:14:57 crc kubenswrapper[4676]: I0124 00:14:57.778263 4676 generic.go:334] "Generic (PLEG): container finished" podID="79811b63-a3e6-47b0-8041-247b1536ba50" containerID="efe746eccedd511dacaaf4516927d95056452db8cbdcc622f804b5020b8c32ea" exitCode=0 Jan 24 00:14:57 crc kubenswrapper[4676]: I0124 00:14:57.778404 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" event={"ID":"79811b63-a3e6-47b0-8041-247b1536ba50","Type":"ContainerDied","Data":"efe746eccedd511dacaaf4516927d95056452db8cbdcc622f804b5020b8c32ea"} Jan 24 00:14:58 crc kubenswrapper[4676]: I0124 00:14:58.792130 4676 
generic.go:334] "Generic (PLEG): container finished" podID="79811b63-a3e6-47b0-8041-247b1536ba50" containerID="8477a53dd8b97bb84ef79ddf7cc0b2f8265d75e677d413a8a09a26e547220989" exitCode=0 Jan 24 00:14:58 crc kubenswrapper[4676]: I0124 00:14:58.792546 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" event={"ID":"79811b63-a3e6-47b0-8041-247b1536ba50","Type":"ContainerDied","Data":"8477a53dd8b97bb84ef79ddf7cc0b2f8265d75e677d413a8a09a26e547220989"} Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.101972 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.179470 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p"] Jan 24 00:15:00 crc kubenswrapper[4676]: E0124 00:15:00.179734 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79811b63-a3e6-47b0-8041-247b1536ba50" containerName="util" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.179754 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="79811b63-a3e6-47b0-8041-247b1536ba50" containerName="util" Jan 24 00:15:00 crc kubenswrapper[4676]: E0124 00:15:00.179770 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79811b63-a3e6-47b0-8041-247b1536ba50" containerName="pull" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.179778 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="79811b63-a3e6-47b0-8041-247b1536ba50" containerName="pull" Jan 24 00:15:00 crc kubenswrapper[4676]: E0124 00:15:00.179799 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79811b63-a3e6-47b0-8041-247b1536ba50" containerName="extract" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.179807 4676 
state_mem.go:107] "Deleted CPUSet assignment" podUID="79811b63-a3e6-47b0-8041-247b1536ba50" containerName="extract" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.179931 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="79811b63-a3e6-47b0-8041-247b1536ba50" containerName="extract" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.180464 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.184135 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.184612 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.187171 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p"] Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.187618 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79811b63-a3e6-47b0-8041-247b1536ba50-bundle\") pod \"79811b63-a3e6-47b0-8041-247b1536ba50\" (UID: \"79811b63-a3e6-47b0-8041-247b1536ba50\") " Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.187922 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79811b63-a3e6-47b0-8041-247b1536ba50-util\") pod \"79811b63-a3e6-47b0-8041-247b1536ba50\" (UID: \"79811b63-a3e6-47b0-8041-247b1536ba50\") " Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.187987 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdrc2\" (UniqueName: 
\"kubernetes.io/projected/79811b63-a3e6-47b0-8041-247b1536ba50-kube-api-access-vdrc2\") pod \"79811b63-a3e6-47b0-8041-247b1536ba50\" (UID: \"79811b63-a3e6-47b0-8041-247b1536ba50\") " Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.189077 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79811b63-a3e6-47b0-8041-247b1536ba50-bundle" (OuterVolumeSpecName: "bundle") pod "79811b63-a3e6-47b0-8041-247b1536ba50" (UID: "79811b63-a3e6-47b0-8041-247b1536ba50"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.204641 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79811b63-a3e6-47b0-8041-247b1536ba50-kube-api-access-vdrc2" (OuterVolumeSpecName: "kube-api-access-vdrc2") pod "79811b63-a3e6-47b0-8041-247b1536ba50" (UID: "79811b63-a3e6-47b0-8041-247b1536ba50"). InnerVolumeSpecName "kube-api-access-vdrc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.212561 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79811b63-a3e6-47b0-8041-247b1536ba50-util" (OuterVolumeSpecName: "util") pod "79811b63-a3e6-47b0-8041-247b1536ba50" (UID: "79811b63-a3e6-47b0-8041-247b1536ba50"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.290067 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40ef8e58-50e2-4cc5-b92c-35710605e5b1-secret-volume\") pod \"collect-profiles-29486895-5v67p\" (UID: \"40ef8e58-50e2-4cc5-b92c-35710605e5b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.290140 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bvwt\" (UniqueName: \"kubernetes.io/projected/40ef8e58-50e2-4cc5-b92c-35710605e5b1-kube-api-access-7bvwt\") pod \"collect-profiles-29486895-5v67p\" (UID: \"40ef8e58-50e2-4cc5-b92c-35710605e5b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.290185 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40ef8e58-50e2-4cc5-b92c-35710605e5b1-config-volume\") pod \"collect-profiles-29486895-5v67p\" (UID: \"40ef8e58-50e2-4cc5-b92c-35710605e5b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.290280 4676 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79811b63-a3e6-47b0-8041-247b1536ba50-util\") on node \"crc\" DevicePath \"\"" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.290298 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdrc2\" (UniqueName: \"kubernetes.io/projected/79811b63-a3e6-47b0-8041-247b1536ba50-kube-api-access-vdrc2\") on node \"crc\" DevicePath \"\"" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.290314 4676 
reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79811b63-a3e6-47b0-8041-247b1536ba50-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.391005 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40ef8e58-50e2-4cc5-b92c-35710605e5b1-config-volume\") pod \"collect-profiles-29486895-5v67p\" (UID: \"40ef8e58-50e2-4cc5-b92c-35710605e5b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.391101 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40ef8e58-50e2-4cc5-b92c-35710605e5b1-secret-volume\") pod \"collect-profiles-29486895-5v67p\" (UID: \"40ef8e58-50e2-4cc5-b92c-35710605e5b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.391185 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bvwt\" (UniqueName: \"kubernetes.io/projected/40ef8e58-50e2-4cc5-b92c-35710605e5b1-kube-api-access-7bvwt\") pod \"collect-profiles-29486895-5v67p\" (UID: \"40ef8e58-50e2-4cc5-b92c-35710605e5b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.392216 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40ef8e58-50e2-4cc5-b92c-35710605e5b1-config-volume\") pod \"collect-profiles-29486895-5v67p\" (UID: \"40ef8e58-50e2-4cc5-b92c-35710605e5b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.397389 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40ef8e58-50e2-4cc5-b92c-35710605e5b1-secret-volume\") pod \"collect-profiles-29486895-5v67p\" (UID: \"40ef8e58-50e2-4cc5-b92c-35710605e5b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.417810 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bvwt\" (UniqueName: \"kubernetes.io/projected/40ef8e58-50e2-4cc5-b92c-35710605e5b1-kube-api-access-7bvwt\") pod \"collect-profiles-29486895-5v67p\" (UID: \"40ef8e58-50e2-4cc5-b92c-35710605e5b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.494605 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.751504 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p"] Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.814151 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" event={"ID":"79811b63-a3e6-47b0-8041-247b1536ba50","Type":"ContainerDied","Data":"2f8fc73d1311811b964eaef9ece132796c7141b06fe05cb4addc6171b93ccaed"} Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.814503 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f8fc73d1311811b964eaef9ece132796c7141b06fe05cb4addc6171b93ccaed" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.814627 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv" Jan 24 00:15:00 crc kubenswrapper[4676]: I0124 00:15:00.818020 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" event={"ID":"40ef8e58-50e2-4cc5-b92c-35710605e5b1","Type":"ContainerStarted","Data":"4e0a09189da28f9a231ef5cf2463bb087d76d53a5efd543b3017b1dc4cbb92ba"} Jan 24 00:15:01 crc kubenswrapper[4676]: I0124 00:15:01.826336 4676 generic.go:334] "Generic (PLEG): container finished" podID="40ef8e58-50e2-4cc5-b92c-35710605e5b1" containerID="1f9529fca33e87b8d8a2cf842fe194d301e46a27673eb8a957977d0bcb8f4f8d" exitCode=0 Jan 24 00:15:01 crc kubenswrapper[4676]: I0124 00:15:01.826412 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" event={"ID":"40ef8e58-50e2-4cc5-b92c-35710605e5b1","Type":"ContainerDied","Data":"1f9529fca33e87b8d8a2cf842fe194d301e46a27673eb8a957977d0bcb8f4f8d"} Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.016861 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.107422 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jsbxx"] Jan 24 00:15:03 crc kubenswrapper[4676]: E0124 00:15:03.107660 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ef8e58-50e2-4cc5-b92c-35710605e5b1" containerName="collect-profiles" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.107671 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ef8e58-50e2-4cc5-b92c-35710605e5b1" containerName="collect-profiles" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.107795 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ef8e58-50e2-4cc5-b92c-35710605e5b1" containerName="collect-profiles" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.108204 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jsbxx" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.110575 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-fpvgz" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.110612 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.110684 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.123410 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jsbxx"] Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.130537 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/40ef8e58-50e2-4cc5-b92c-35710605e5b1-secret-volume\") pod \"40ef8e58-50e2-4cc5-b92c-35710605e5b1\" (UID: \"40ef8e58-50e2-4cc5-b92c-35710605e5b1\") " Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.130632 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40ef8e58-50e2-4cc5-b92c-35710605e5b1-config-volume\") pod \"40ef8e58-50e2-4cc5-b92c-35710605e5b1\" (UID: \"40ef8e58-50e2-4cc5-b92c-35710605e5b1\") " Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.130670 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bvwt\" (UniqueName: \"kubernetes.io/projected/40ef8e58-50e2-4cc5-b92c-35710605e5b1-kube-api-access-7bvwt\") pod \"40ef8e58-50e2-4cc5-b92c-35710605e5b1\" (UID: \"40ef8e58-50e2-4cc5-b92c-35710605e5b1\") " Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.131414 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40ef8e58-50e2-4cc5-b92c-35710605e5b1-config-volume" (OuterVolumeSpecName: "config-volume") pod "40ef8e58-50e2-4cc5-b92c-35710605e5b1" (UID: "40ef8e58-50e2-4cc5-b92c-35710605e5b1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.136898 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ef8e58-50e2-4cc5-b92c-35710605e5b1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "40ef8e58-50e2-4cc5-b92c-35710605e5b1" (UID: "40ef8e58-50e2-4cc5-b92c-35710605e5b1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.154602 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ef8e58-50e2-4cc5-b92c-35710605e5b1-kube-api-access-7bvwt" (OuterVolumeSpecName: "kube-api-access-7bvwt") pod "40ef8e58-50e2-4cc5-b92c-35710605e5b1" (UID: "40ef8e58-50e2-4cc5-b92c-35710605e5b1"). InnerVolumeSpecName "kube-api-access-7bvwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.233084 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9qjn\" (UniqueName: \"kubernetes.io/projected/efdaf1c0-0096-4f21-a0e2-2fc6f6e04de2-kube-api-access-t9qjn\") pod \"nmstate-operator-646758c888-jsbxx\" (UID: \"efdaf1c0-0096-4f21-a0e2-2fc6f6e04de2\") " pod="openshift-nmstate/nmstate-operator-646758c888-jsbxx" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.233227 4676 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40ef8e58-50e2-4cc5-b92c-35710605e5b1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.233247 4676 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40ef8e58-50e2-4cc5-b92c-35710605e5b1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.233259 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bvwt\" (UniqueName: \"kubernetes.io/projected/40ef8e58-50e2-4cc5-b92c-35710605e5b1-kube-api-access-7bvwt\") on node \"crc\" DevicePath \"\"" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.334868 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9qjn\" (UniqueName: 
\"kubernetes.io/projected/efdaf1c0-0096-4f21-a0e2-2fc6f6e04de2-kube-api-access-t9qjn\") pod \"nmstate-operator-646758c888-jsbxx\" (UID: \"efdaf1c0-0096-4f21-a0e2-2fc6f6e04de2\") " pod="openshift-nmstate/nmstate-operator-646758c888-jsbxx" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.356435 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9qjn\" (UniqueName: \"kubernetes.io/projected/efdaf1c0-0096-4f21-a0e2-2fc6f6e04de2-kube-api-access-t9qjn\") pod \"nmstate-operator-646758c888-jsbxx\" (UID: \"efdaf1c0-0096-4f21-a0e2-2fc6f6e04de2\") " pod="openshift-nmstate/nmstate-operator-646758c888-jsbxx" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.419915 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jsbxx" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.648882 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jsbxx"] Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.837617 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-jsbxx" event={"ID":"efdaf1c0-0096-4f21-a0e2-2fc6f6e04de2","Type":"ContainerStarted","Data":"92cccb2822eb5ab7dff341249774a2f78675e8dd3c31865ae18bc7415a408904"} Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.839138 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" event={"ID":"40ef8e58-50e2-4cc5-b92c-35710605e5b1","Type":"ContainerDied","Data":"4e0a09189da28f9a231ef5cf2463bb087d76d53a5efd543b3017b1dc4cbb92ba"} Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.839171 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e0a09189da28f9a231ef5cf2463bb087d76d53a5efd543b3017b1dc4cbb92ba" Jan 24 00:15:03 crc kubenswrapper[4676]: I0124 00:15:03.839228 4676 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p" Jan 24 00:15:06 crc kubenswrapper[4676]: I0124 00:15:06.863612 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-jsbxx" event={"ID":"efdaf1c0-0096-4f21-a0e2-2fc6f6e04de2","Type":"ContainerStarted","Data":"c6d3d119aa50a84f358aa734505a8382c68b2998b0fa588a9d3004b9bddda533"} Jan 24 00:15:06 crc kubenswrapper[4676]: I0124 00:15:06.890688 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-jsbxx" podStartSLOduration=1.595507754 podStartE2EDuration="3.890667203s" podCreationTimestamp="2026-01-24 00:15:03 +0000 UTC" firstStartedPulling="2026-01-24 00:15:03.66894515 +0000 UTC m=+687.698916151" lastFinishedPulling="2026-01-24 00:15:05.964104589 +0000 UTC m=+689.994075600" observedRunningTime="2026-01-24 00:15:06.886205706 +0000 UTC m=+690.916176747" watchObservedRunningTime="2026-01-24 00:15:06.890667203 +0000 UTC m=+690.920638214" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.348757 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rv2q7"] Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.350464 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rv2q7" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.353287 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv"] Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.353978 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.392080 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv"] Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.396366 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.398002 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-fgnjq" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.401005 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rv2q7"] Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.413475 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lkqsz"] Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.414589 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.511439 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6bdbc24e-ee52-41f0-9aaf-1091ac803c27-dbus-socket\") pod \"nmstate-handler-lkqsz\" (UID: \"6bdbc24e-ee52-41f0-9aaf-1091ac803c27\") " pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.511480 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6bdbc24e-ee52-41f0-9aaf-1091ac803c27-nmstate-lock\") pod \"nmstate-handler-lkqsz\" (UID: \"6bdbc24e-ee52-41f0-9aaf-1091ac803c27\") " pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.511504 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/badb3470-b60b-44ca-8d9e-52191ea016fa-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-cpbrv\" (UID: \"badb3470-b60b-44ca-8d9e-52191ea016fa\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.511529 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj6g7\" (UniqueName: \"kubernetes.io/projected/badb3470-b60b-44ca-8d9e-52191ea016fa-kube-api-access-fj6g7\") pod \"nmstate-webhook-8474b5b9d8-cpbrv\" (UID: \"badb3470-b60b-44ca-8d9e-52191ea016fa\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.511606 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdn9z\" (UniqueName: 
\"kubernetes.io/projected/6bdbc24e-ee52-41f0-9aaf-1091ac803c27-kube-api-access-gdn9z\") pod \"nmstate-handler-lkqsz\" (UID: \"6bdbc24e-ee52-41f0-9aaf-1091ac803c27\") " pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.511622 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd422\" (UniqueName: \"kubernetes.io/projected/0c99caac-d9eb-494c-bd04-c18dbc8a0844-kube-api-access-dd422\") pod \"nmstate-metrics-54757c584b-rv2q7\" (UID: \"0c99caac-d9eb-494c-bd04-c18dbc8a0844\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rv2q7" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.511638 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6bdbc24e-ee52-41f0-9aaf-1091ac803c27-ovs-socket\") pod \"nmstate-handler-lkqsz\" (UID: \"6bdbc24e-ee52-41f0-9aaf-1091ac803c27\") " pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.539462 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988"] Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.540086 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.541892 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.542546 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.547232 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-4znsj" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.548720 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988"] Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.613463 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdn9z\" (UniqueName: \"kubernetes.io/projected/6bdbc24e-ee52-41f0-9aaf-1091ac803c27-kube-api-access-gdn9z\") pod \"nmstate-handler-lkqsz\" (UID: \"6bdbc24e-ee52-41f0-9aaf-1091ac803c27\") " pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.613500 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd422\" (UniqueName: \"kubernetes.io/projected/0c99caac-d9eb-494c-bd04-c18dbc8a0844-kube-api-access-dd422\") pod \"nmstate-metrics-54757c584b-rv2q7\" (UID: \"0c99caac-d9eb-494c-bd04-c18dbc8a0844\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rv2q7" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.613526 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6bdbc24e-ee52-41f0-9aaf-1091ac803c27-ovs-socket\") pod \"nmstate-handler-lkqsz\" (UID: \"6bdbc24e-ee52-41f0-9aaf-1091ac803c27\") " pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:29 crc 
kubenswrapper[4676]: I0124 00:15:29.613556 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6bdbc24e-ee52-41f0-9aaf-1091ac803c27-dbus-socket\") pod \"nmstate-handler-lkqsz\" (UID: \"6bdbc24e-ee52-41f0-9aaf-1091ac803c27\") " pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.613572 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6bdbc24e-ee52-41f0-9aaf-1091ac803c27-nmstate-lock\") pod \"nmstate-handler-lkqsz\" (UID: \"6bdbc24e-ee52-41f0-9aaf-1091ac803c27\") " pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.613589 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/badb3470-b60b-44ca-8d9e-52191ea016fa-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-cpbrv\" (UID: \"badb3470-b60b-44ca-8d9e-52191ea016fa\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.613614 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj6g7\" (UniqueName: \"kubernetes.io/projected/badb3470-b60b-44ca-8d9e-52191ea016fa-kube-api-access-fj6g7\") pod \"nmstate-webhook-8474b5b9d8-cpbrv\" (UID: \"badb3470-b60b-44ca-8d9e-52191ea016fa\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.613674 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6bdbc24e-ee52-41f0-9aaf-1091ac803c27-ovs-socket\") pod \"nmstate-handler-lkqsz\" (UID: \"6bdbc24e-ee52-41f0-9aaf-1091ac803c27\") " pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.613697 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6bdbc24e-ee52-41f0-9aaf-1091ac803c27-nmstate-lock\") pod \"nmstate-handler-lkqsz\" (UID: \"6bdbc24e-ee52-41f0-9aaf-1091ac803c27\") " pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.613913 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6bdbc24e-ee52-41f0-9aaf-1091ac803c27-dbus-socket\") pod \"nmstate-handler-lkqsz\" (UID: \"6bdbc24e-ee52-41f0-9aaf-1091ac803c27\") " pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.628672 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/badb3470-b60b-44ca-8d9e-52191ea016fa-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-cpbrv\" (UID: \"badb3470-b60b-44ca-8d9e-52191ea016fa\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.630362 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdn9z\" (UniqueName: \"kubernetes.io/projected/6bdbc24e-ee52-41f0-9aaf-1091ac803c27-kube-api-access-gdn9z\") pod \"nmstate-handler-lkqsz\" (UID: \"6bdbc24e-ee52-41f0-9aaf-1091ac803c27\") " pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.631692 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd422\" (UniqueName: \"kubernetes.io/projected/0c99caac-d9eb-494c-bd04-c18dbc8a0844-kube-api-access-dd422\") pod \"nmstate-metrics-54757c584b-rv2q7\" (UID: \"0c99caac-d9eb-494c-bd04-c18dbc8a0844\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rv2q7" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.636082 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fj6g7\" (UniqueName: \"kubernetes.io/projected/badb3470-b60b-44ca-8d9e-52191ea016fa-kube-api-access-fj6g7\") pod \"nmstate-webhook-8474b5b9d8-cpbrv\" (UID: \"badb3470-b60b-44ca-8d9e-52191ea016fa\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.699775 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rv2q7" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.708985 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.718026 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4db34b09-85e6-435d-b991-c2513eec5d17-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-95988\" (UID: \"4db34b09-85e6-435d-b991-c2513eec5d17\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.718233 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4db34b09-85e6-435d-b991-c2513eec5d17-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-95988\" (UID: \"4db34b09-85e6-435d-b991-c2513eec5d17\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.718254 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpqm6\" (UniqueName: \"kubernetes.io/projected/4db34b09-85e6-435d-b991-c2513eec5d17-kube-api-access-lpqm6\") pod \"nmstate-console-plugin-7754f76f8b-95988\" (UID: \"4db34b09-85e6-435d-b991-c2513eec5d17\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988" Jan 
24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.739246 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-646b8b7948-5mzrb"] Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.739889 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.750746 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.766430 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-646b8b7948-5mzrb"] Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.822032 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4db34b09-85e6-435d-b991-c2513eec5d17-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-95988\" (UID: \"4db34b09-85e6-435d-b991-c2513eec5d17\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.822068 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4db34b09-85e6-435d-b991-c2513eec5d17-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-95988\" (UID: \"4db34b09-85e6-435d-b991-c2513eec5d17\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.822088 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpqm6\" (UniqueName: \"kubernetes.io/projected/4db34b09-85e6-435d-b991-c2513eec5d17-kube-api-access-lpqm6\") pod \"nmstate-console-plugin-7754f76f8b-95988\" (UID: \"4db34b09-85e6-435d-b991-c2513eec5d17\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988" Jan 24 00:15:29 crc 
kubenswrapper[4676]: I0124 00:15:29.823344 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4db34b09-85e6-435d-b991-c2513eec5d17-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-95988\" (UID: \"4db34b09-85e6-435d-b991-c2513eec5d17\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.832122 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4db34b09-85e6-435d-b991-c2513eec5d17-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-95988\" (UID: \"4db34b09-85e6-435d-b991-c2513eec5d17\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.853394 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpqm6\" (UniqueName: \"kubernetes.io/projected/4db34b09-85e6-435d-b991-c2513eec5d17-kube-api-access-lpqm6\") pod \"nmstate-console-plugin-7754f76f8b-95988\" (UID: \"4db34b09-85e6-435d-b991-c2513eec5d17\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.855871 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.923309 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-console-serving-cert\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.923349 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-console-config\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.923487 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgnzk\" (UniqueName: \"kubernetes.io/projected/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-kube-api-access-vgnzk\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.923527 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-trusted-ca-bundle\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.923618 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-service-ca\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.923645 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-console-oauth-config\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.923694 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-oauth-serving-cert\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:29 crc kubenswrapper[4676]: I0124 00:15:29.988226 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rv2q7"] Jan 24 00:15:30 crc kubenswrapper[4676]: W0124 00:15:30.005756 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c99caac_d9eb_494c_bd04_c18dbc8a0844.slice/crio-5c90ef64ad9fae698aa4621f61cb972076b24d792ea61941543af01d0c927a6f WatchSource:0}: Error finding container 5c90ef64ad9fae698aa4621f61cb972076b24d792ea61941543af01d0c927a6f: Status 404 returned error can't find the container with id 5c90ef64ad9fae698aa4621f61cb972076b24d792ea61941543af01d0c927a6f Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.010287 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv"] Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.014826 
4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lkqsz" event={"ID":"6bdbc24e-ee52-41f0-9aaf-1091ac803c27","Type":"ContainerStarted","Data":"35175506727db5307573d9cc30023df1695d2911019380bc72f5197925af315f"} Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.015954 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rv2q7" event={"ID":"0c99caac-d9eb-494c-bd04-c18dbc8a0844","Type":"ContainerStarted","Data":"5c90ef64ad9fae698aa4621f61cb972076b24d792ea61941543af01d0c927a6f"} Jan 24 00:15:30 crc kubenswrapper[4676]: W0124 00:15:30.017342 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbadb3470_b60b_44ca_8d9e_52191ea016fa.slice/crio-7bdf627a13b1ccbdffa4ec8f369a655cf8fb07762595b6dc01fa8326be4a9925 WatchSource:0}: Error finding container 7bdf627a13b1ccbdffa4ec8f369a655cf8fb07762595b6dc01fa8326be4a9925: Status 404 returned error can't find the container with id 7bdf627a13b1ccbdffa4ec8f369a655cf8fb07762595b6dc01fa8326be4a9925 Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.025672 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-console-oauth-config\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.026664 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-oauth-serving-cert\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.026706 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-console-config\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.026756 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-console-serving-cert\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.026845 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgnzk\" (UniqueName: \"kubernetes.io/projected/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-kube-api-access-vgnzk\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.026945 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-trusted-ca-bundle\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.026993 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-service-ca\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.027545 4676 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-oauth-serving-cert\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.027778 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-service-ca\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.028343 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-console-config\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.028949 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-trusted-ca-bundle\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.029800 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-console-oauth-config\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.034674 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-console-serving-cert\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.043805 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgnzk\" (UniqueName: \"kubernetes.io/projected/19d2088d-e8c0-4c76-be55-b5a9d3d1e681-kube-api-access-vgnzk\") pod \"console-646b8b7948-5mzrb\" (UID: \"19d2088d-e8c0-4c76-be55-b5a9d3d1e681\") " pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.072964 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.077107 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988"] Jan 24 00:15:30 crc kubenswrapper[4676]: I0124 00:15:30.238668 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-646b8b7948-5mzrb"] Jan 24 00:15:30 crc kubenswrapper[4676]: W0124 00:15:30.241572 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19d2088d_e8c0_4c76_be55_b5a9d3d1e681.slice/crio-e30dee6d031278dadfaa4547c9699eb6b480f4a1dc38a0850888cb9f5a2fb383 WatchSource:0}: Error finding container e30dee6d031278dadfaa4547c9699eb6b480f4a1dc38a0850888cb9f5a2fb383: Status 404 returned error can't find the container with id e30dee6d031278dadfaa4547c9699eb6b480f4a1dc38a0850888cb9f5a2fb383 Jan 24 00:15:31 crc kubenswrapper[4676]: I0124 00:15:31.047875 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-646b8b7948-5mzrb" 
event={"ID":"19d2088d-e8c0-4c76-be55-b5a9d3d1e681","Type":"ContainerStarted","Data":"fba11e043b13fe9ba986b50fe5a6a225e513e9c6b83f1d17947e2dcfb50b0d38"} Jan 24 00:15:31 crc kubenswrapper[4676]: I0124 00:15:31.048413 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-646b8b7948-5mzrb" event={"ID":"19d2088d-e8c0-4c76-be55-b5a9d3d1e681","Type":"ContainerStarted","Data":"e30dee6d031278dadfaa4547c9699eb6b480f4a1dc38a0850888cb9f5a2fb383"} Jan 24 00:15:31 crc kubenswrapper[4676]: I0124 00:15:31.053502 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988" event={"ID":"4db34b09-85e6-435d-b991-c2513eec5d17","Type":"ContainerStarted","Data":"0f0fa59f140e0b22263007c638a8d7342517e2280615defc8f3853c0688da093"} Jan 24 00:15:31 crc kubenswrapper[4676]: I0124 00:15:31.060700 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv" event={"ID":"badb3470-b60b-44ca-8d9e-52191ea016fa","Type":"ContainerStarted","Data":"7bdf627a13b1ccbdffa4ec8f369a655cf8fb07762595b6dc01fa8326be4a9925"} Jan 24 00:15:31 crc kubenswrapper[4676]: I0124 00:15:31.086192 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-646b8b7948-5mzrb" podStartSLOduration=2.086135577 podStartE2EDuration="2.086135577s" podCreationTimestamp="2026-01-24 00:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:15:31.081987651 +0000 UTC m=+715.111958692" watchObservedRunningTime="2026-01-24 00:15:31.086135577 +0000 UTC m=+715.116106588" Jan 24 00:15:34 crc kubenswrapper[4676]: I0124 00:15:34.079320 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lkqsz" 
event={"ID":"6bdbc24e-ee52-41f0-9aaf-1091ac803c27","Type":"ContainerStarted","Data":"0aca8597998e206c4a952a56598aab99cefe88f62b281da6908483d638961f79"} Jan 24 00:15:34 crc kubenswrapper[4676]: I0124 00:15:34.079934 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:34 crc kubenswrapper[4676]: I0124 00:15:34.083415 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988" event={"ID":"4db34b09-85e6-435d-b991-c2513eec5d17","Type":"ContainerStarted","Data":"a0e665983abe577c8bc2a4fbbaf9624fceff77ef0f6b40eacd5d5cd4f0eaf012"} Jan 24 00:15:34 crc kubenswrapper[4676]: I0124 00:15:34.085956 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv" event={"ID":"badb3470-b60b-44ca-8d9e-52191ea016fa","Type":"ContainerStarted","Data":"9ac7a9454ee76866513183093ce28706ab20aa234cf23438a87dbbe2e43cbf9d"} Jan 24 00:15:34 crc kubenswrapper[4676]: I0124 00:15:34.086592 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv" Jan 24 00:15:34 crc kubenswrapper[4676]: I0124 00:15:34.088824 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rv2q7" event={"ID":"0c99caac-d9eb-494c-bd04-c18dbc8a0844","Type":"ContainerStarted","Data":"40d4597c38bafb120668cbf6b639ed71e2620fa0bfa8625c6f6e949c4aedf8c6"} Jan 24 00:15:34 crc kubenswrapper[4676]: I0124 00:15:34.102827 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lkqsz" podStartSLOduration=1.922074602 podStartE2EDuration="5.102806941s" podCreationTimestamp="2026-01-24 00:15:29 +0000 UTC" firstStartedPulling="2026-01-24 00:15:29.799973474 +0000 UTC m=+713.829944475" lastFinishedPulling="2026-01-24 00:15:32.980705813 +0000 UTC m=+717.010676814" observedRunningTime="2026-01-24 
00:15:34.100747727 +0000 UTC m=+718.130718748" watchObservedRunningTime="2026-01-24 00:15:34.102806941 +0000 UTC m=+718.132777952" Jan 24 00:15:34 crc kubenswrapper[4676]: I0124 00:15:34.167631 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv" podStartSLOduration=2.209733919 podStartE2EDuration="5.167611032s" podCreationTimestamp="2026-01-24 00:15:29 +0000 UTC" firstStartedPulling="2026-01-24 00:15:30.019140668 +0000 UTC m=+714.049111669" lastFinishedPulling="2026-01-24 00:15:32.977017771 +0000 UTC m=+717.006988782" observedRunningTime="2026-01-24 00:15:34.121825472 +0000 UTC m=+718.151796493" watchObservedRunningTime="2026-01-24 00:15:34.167611032 +0000 UTC m=+718.197582033" Jan 24 00:15:34 crc kubenswrapper[4676]: I0124 00:15:34.169676 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-95988" podStartSLOduration=2.287322754 podStartE2EDuration="5.169666596s" podCreationTimestamp="2026-01-24 00:15:29 +0000 UTC" firstStartedPulling="2026-01-24 00:15:30.087260932 +0000 UTC m=+714.117231933" lastFinishedPulling="2026-01-24 00:15:32.969604754 +0000 UTC m=+716.999575775" observedRunningTime="2026-01-24 00:15:34.165990623 +0000 UTC m=+718.195961634" watchObservedRunningTime="2026-01-24 00:15:34.169666596 +0000 UTC m=+718.199637597" Jan 24 00:15:36 crc kubenswrapper[4676]: I0124 00:15:36.103190 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rv2q7" event={"ID":"0c99caac-d9eb-494c-bd04-c18dbc8a0844","Type":"ContainerStarted","Data":"22e7763a5ed5b606f7dc1ac423a90590967569c2253c1453d63ea52f3965c491"} Jan 24 00:15:36 crc kubenswrapper[4676]: I0124 00:15:36.129775 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-rv2q7" podStartSLOduration=1.595215281 podStartE2EDuration="7.129751627s" 
podCreationTimestamp="2026-01-24 00:15:29 +0000 UTC" firstStartedPulling="2026-01-24 00:15:30.009921796 +0000 UTC m=+714.039892797" lastFinishedPulling="2026-01-24 00:15:35.544458142 +0000 UTC m=+719.574429143" observedRunningTime="2026-01-24 00:15:36.125966771 +0000 UTC m=+720.155937812" watchObservedRunningTime="2026-01-24 00:15:36.129751627 +0000 UTC m=+720.159722668" Jan 24 00:15:39 crc kubenswrapper[4676]: I0124 00:15:39.364259 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:15:39 crc kubenswrapper[4676]: I0124 00:15:39.364733 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:15:39 crc kubenswrapper[4676]: I0124 00:15:39.793612 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-lkqsz" Jan 24 00:15:40 crc kubenswrapper[4676]: I0124 00:15:40.073704 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:40 crc kubenswrapper[4676]: I0124 00:15:40.073783 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:40 crc kubenswrapper[4676]: I0124 00:15:40.082163 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:40 crc kubenswrapper[4676]: I0124 00:15:40.148836 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-646b8b7948-5mzrb" Jan 24 00:15:40 crc kubenswrapper[4676]: I0124 00:15:40.222977 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g2smk"] Jan 24 00:15:49 crc kubenswrapper[4676]: I0124 00:15:49.718306 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cpbrv" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.278326 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-g2smk" podUID="5cce043a-2f1b-4f48-967e-c48a00cfe1a6" containerName="console" containerID="cri-o://c3846591c5d0c38a4ca3a3f7394a46d77743449e795f56e0255610d9941b7250" gracePeriod=15 Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.624666 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g2smk_5cce043a-2f1b-4f48-967e-c48a00cfe1a6/console/0.log" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.624947 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.699077 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-serving-cert\") pod \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.699112 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6dpw\" (UniqueName: \"kubernetes.io/projected/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-kube-api-access-f6dpw\") pod \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.699163 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-service-ca\") pod \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.699243 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-oauth-config\") pod \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.699270 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-config\") pod \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.699288 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-oauth-serving-cert\") pod \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.699312 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-trusted-ca-bundle\") pod \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\" (UID: \"5cce043a-2f1b-4f48-967e-c48a00cfe1a6\") " Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.699879 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5cce043a-2f1b-4f48-967e-c48a00cfe1a6" (UID: "5cce043a-2f1b-4f48-967e-c48a00cfe1a6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.699889 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-service-ca" (OuterVolumeSpecName: "service-ca") pod "5cce043a-2f1b-4f48-967e-c48a00cfe1a6" (UID: "5cce043a-2f1b-4f48-967e-c48a00cfe1a6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.700131 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-config" (OuterVolumeSpecName: "console-config") pod "5cce043a-2f1b-4f48-967e-c48a00cfe1a6" (UID: "5cce043a-2f1b-4f48-967e-c48a00cfe1a6"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.700287 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5cce043a-2f1b-4f48-967e-c48a00cfe1a6" (UID: "5cce043a-2f1b-4f48-967e-c48a00cfe1a6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.704066 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-kube-api-access-f6dpw" (OuterVolumeSpecName: "kube-api-access-f6dpw") pod "5cce043a-2f1b-4f48-967e-c48a00cfe1a6" (UID: "5cce043a-2f1b-4f48-967e-c48a00cfe1a6"). InnerVolumeSpecName "kube-api-access-f6dpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.704131 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5cce043a-2f1b-4f48-967e-c48a00cfe1a6" (UID: "5cce043a-2f1b-4f48-967e-c48a00cfe1a6"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.707191 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5cce043a-2f1b-4f48-967e-c48a00cfe1a6" (UID: "5cce043a-2f1b-4f48-967e-c48a00cfe1a6"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.801042 4676 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.801084 4676 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.801103 4676 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.801123 4676 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.801139 4676 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.801156 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6dpw\" (UniqueName: \"kubernetes.io/projected/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-kube-api-access-f6dpw\") on node \"crc\" DevicePath \"\"" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.801173 4676 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5cce043a-2f1b-4f48-967e-c48a00cfe1a6-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 00:16:05 crc 
kubenswrapper[4676]: I0124 00:16:05.893976 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j"] Jan 24 00:16:05 crc kubenswrapper[4676]: E0124 00:16:05.894288 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cce043a-2f1b-4f48-967e-c48a00cfe1a6" containerName="console" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.894307 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cce043a-2f1b-4f48-967e-c48a00cfe1a6" containerName="console" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.894849 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cce043a-2f1b-4f48-967e-c48a00cfe1a6" containerName="console" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.896070 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.898100 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.902093 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c111688-154b-47fa-8f89-6e48007b1fec-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j\" (UID: \"3c111688-154b-47fa-8f89-6e48007b1fec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.902167 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7wz4\" (UniqueName: \"kubernetes.io/projected/3c111688-154b-47fa-8f89-6e48007b1fec-kube-api-access-p7wz4\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j\" (UID: \"3c111688-154b-47fa-8f89-6e48007b1fec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.902194 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c111688-154b-47fa-8f89-6e48007b1fec-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j\" (UID: \"3c111688-154b-47fa-8f89-6e48007b1fec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" Jan 24 00:16:05 crc kubenswrapper[4676]: I0124 00:16:05.914590 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j"] Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.003409 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7wz4\" (UniqueName: \"kubernetes.io/projected/3c111688-154b-47fa-8f89-6e48007b1fec-kube-api-access-p7wz4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j\" (UID: \"3c111688-154b-47fa-8f89-6e48007b1fec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.003651 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c111688-154b-47fa-8f89-6e48007b1fec-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j\" (UID: \"3c111688-154b-47fa-8f89-6e48007b1fec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.003765 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/3c111688-154b-47fa-8f89-6e48007b1fec-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j\" (UID: \"3c111688-154b-47fa-8f89-6e48007b1fec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.004291 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c111688-154b-47fa-8f89-6e48007b1fec-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j\" (UID: \"3c111688-154b-47fa-8f89-6e48007b1fec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.004514 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c111688-154b-47fa-8f89-6e48007b1fec-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j\" (UID: \"3c111688-154b-47fa-8f89-6e48007b1fec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.025262 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7wz4\" (UniqueName: \"kubernetes.io/projected/3c111688-154b-47fa-8f89-6e48007b1fec-kube-api-access-p7wz4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j\" (UID: \"3c111688-154b-47fa-8f89-6e48007b1fec\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.221636 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.312106 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g2smk_5cce043a-2f1b-4f48-967e-c48a00cfe1a6/console/0.log" Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.312169 4676 generic.go:334] "Generic (PLEG): container finished" podID="5cce043a-2f1b-4f48-967e-c48a00cfe1a6" containerID="c3846591c5d0c38a4ca3a3f7394a46d77743449e795f56e0255610d9941b7250" exitCode=2 Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.312207 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2smk" event={"ID":"5cce043a-2f1b-4f48-967e-c48a00cfe1a6","Type":"ContainerDied","Data":"c3846591c5d0c38a4ca3a3f7394a46d77743449e795f56e0255610d9941b7250"} Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.312245 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2smk" event={"ID":"5cce043a-2f1b-4f48-967e-c48a00cfe1a6","Type":"ContainerDied","Data":"8150da0c7bf43ea052c8b9cb2dfefad3eaa9d5a8d1321ad37559f3b798f3d407"} Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.312287 4676 scope.go:117] "RemoveContainer" containerID="c3846591c5d0c38a4ca3a3f7394a46d77743449e795f56e0255610d9941b7250" Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.312470 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-g2smk" Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.332567 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g2smk"] Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.340974 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-g2smk"] Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.350198 4676 scope.go:117] "RemoveContainer" containerID="c3846591c5d0c38a4ca3a3f7394a46d77743449e795f56e0255610d9941b7250" Jan 24 00:16:06 crc kubenswrapper[4676]: E0124 00:16:06.351840 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3846591c5d0c38a4ca3a3f7394a46d77743449e795f56e0255610d9941b7250\": container with ID starting with c3846591c5d0c38a4ca3a3f7394a46d77743449e795f56e0255610d9941b7250 not found: ID does not exist" containerID="c3846591c5d0c38a4ca3a3f7394a46d77743449e795f56e0255610d9941b7250" Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.351884 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3846591c5d0c38a4ca3a3f7394a46d77743449e795f56e0255610d9941b7250"} err="failed to get container status \"c3846591c5d0c38a4ca3a3f7394a46d77743449e795f56e0255610d9941b7250\": rpc error: code = NotFound desc = could not find container \"c3846591c5d0c38a4ca3a3f7394a46d77743449e795f56e0255610d9941b7250\": container with ID starting with c3846591c5d0c38a4ca3a3f7394a46d77743449e795f56e0255610d9941b7250 not found: ID does not exist" Jan 24 00:16:06 crc kubenswrapper[4676]: E0124 00:16:06.414496 4676 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cce043a_2f1b_4f48_967e_c48a00cfe1a6.slice/crio-8150da0c7bf43ea052c8b9cb2dfefad3eaa9d5a8d1321ad37559f3b798f3d407\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cce043a_2f1b_4f48_967e_c48a00cfe1a6.slice\": RecentStats: unable to find data in memory cache]" Jan 24 00:16:06 crc kubenswrapper[4676]: I0124 00:16:06.486850 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j"] Jan 24 00:16:07 crc kubenswrapper[4676]: I0124 00:16:07.320423 4676 generic.go:334] "Generic (PLEG): container finished" podID="3c111688-154b-47fa-8f89-6e48007b1fec" containerID="a83c10df42bff0ddbfa91eb239302d1389b7d6db480c439fd5f94c85af065f2d" exitCode=0 Jan 24 00:16:07 crc kubenswrapper[4676]: I0124 00:16:07.320556 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" event={"ID":"3c111688-154b-47fa-8f89-6e48007b1fec","Type":"ContainerDied","Data":"a83c10df42bff0ddbfa91eb239302d1389b7d6db480c439fd5f94c85af065f2d"} Jan 24 00:16:07 crc kubenswrapper[4676]: I0124 00:16:07.324197 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" event={"ID":"3c111688-154b-47fa-8f89-6e48007b1fec","Type":"ContainerStarted","Data":"76a3eda93838b6bb0e5c18b77075ac901cd959dcd48b40594a45fdb7a357e745"} Jan 24 00:16:08 crc kubenswrapper[4676]: I0124 00:16:08.246056 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nfz8s"] Jan 24 00:16:08 crc kubenswrapper[4676]: I0124 00:16:08.248020 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:08 crc kubenswrapper[4676]: I0124 00:16:08.273698 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cce043a-2f1b-4f48-967e-c48a00cfe1a6" path="/var/lib/kubelet/pods/5cce043a-2f1b-4f48-967e-c48a00cfe1a6/volumes" Jan 24 00:16:08 crc kubenswrapper[4676]: I0124 00:16:08.274891 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nfz8s"] Jan 24 00:16:08 crc kubenswrapper[4676]: I0124 00:16:08.444933 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-utilities\") pod \"redhat-operators-nfz8s\" (UID: \"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407\") " pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:08 crc kubenswrapper[4676]: I0124 00:16:08.445017 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hnz4\" (UniqueName: \"kubernetes.io/projected/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-kube-api-access-9hnz4\") pod \"redhat-operators-nfz8s\" (UID: \"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407\") " pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:08 crc kubenswrapper[4676]: I0124 00:16:08.445165 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-catalog-content\") pod \"redhat-operators-nfz8s\" (UID: \"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407\") " pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:08 crc kubenswrapper[4676]: I0124 00:16:08.546336 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-catalog-content\") pod 
\"redhat-operators-nfz8s\" (UID: \"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407\") " pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:08 crc kubenswrapper[4676]: I0124 00:16:08.546421 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-utilities\") pod \"redhat-operators-nfz8s\" (UID: \"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407\") " pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:08 crc kubenswrapper[4676]: I0124 00:16:08.546462 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hnz4\" (UniqueName: \"kubernetes.io/projected/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-kube-api-access-9hnz4\") pod \"redhat-operators-nfz8s\" (UID: \"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407\") " pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:08 crc kubenswrapper[4676]: I0124 00:16:08.547263 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-catalog-content\") pod \"redhat-operators-nfz8s\" (UID: \"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407\") " pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:08 crc kubenswrapper[4676]: I0124 00:16:08.547586 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-utilities\") pod \"redhat-operators-nfz8s\" (UID: \"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407\") " pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:08 crc kubenswrapper[4676]: I0124 00:16:08.566590 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hnz4\" (UniqueName: \"kubernetes.io/projected/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-kube-api-access-9hnz4\") pod \"redhat-operators-nfz8s\" (UID: 
\"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407\") " pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:08 crc kubenswrapper[4676]: I0124 00:16:08.576932 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:08 crc kubenswrapper[4676]: I0124 00:16:08.787263 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nfz8s"] Jan 24 00:16:08 crc kubenswrapper[4676]: W0124 00:16:08.798803 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca7145d4_57a1_4ad4_acc8_b7ca02cfd407.slice/crio-8153314f2ab7211dbb0879d3369667b48e7906201af8f125d1bc6d0c80e1209a WatchSource:0}: Error finding container 8153314f2ab7211dbb0879d3369667b48e7906201af8f125d1bc6d0c80e1209a: Status 404 returned error can't find the container with id 8153314f2ab7211dbb0879d3369667b48e7906201af8f125d1bc6d0c80e1209a Jan 24 00:16:09 crc kubenswrapper[4676]: I0124 00:16:09.337984 4676 generic.go:334] "Generic (PLEG): container finished" podID="ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" containerID="2687c4aebd5cf8c6cdb52a49834465bc2ff9fdc1624a9f7ea9eb31f3d327a7a9" exitCode=0 Jan 24 00:16:09 crc kubenswrapper[4676]: I0124 00:16:09.338131 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfz8s" event={"ID":"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407","Type":"ContainerDied","Data":"2687c4aebd5cf8c6cdb52a49834465bc2ff9fdc1624a9f7ea9eb31f3d327a7a9"} Jan 24 00:16:09 crc kubenswrapper[4676]: I0124 00:16:09.338500 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfz8s" event={"ID":"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407","Type":"ContainerStarted","Data":"8153314f2ab7211dbb0879d3369667b48e7906201af8f125d1bc6d0c80e1209a"} Jan 24 00:16:09 crc kubenswrapper[4676]: I0124 00:16:09.364453 4676 patch_prober.go:28] interesting 
pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:16:09 crc kubenswrapper[4676]: I0124 00:16:09.364515 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:16:11 crc kubenswrapper[4676]: I0124 00:16:11.310460 4676 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 24 00:16:12 crc kubenswrapper[4676]: I0124 00:16:12.357957 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfz8s" event={"ID":"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407","Type":"ContainerStarted","Data":"ab3b8aade9c01793933d385fb272beb855e6bd497784c470d8ff64768ede8f1c"} Jan 24 00:16:12 crc kubenswrapper[4676]: I0124 00:16:12.360220 4676 generic.go:334] "Generic (PLEG): container finished" podID="3c111688-154b-47fa-8f89-6e48007b1fec" containerID="38dee7997e1559ef2efa2b8cdca200f0488dba16c0166cda27d0639e2bae7090" exitCode=0 Jan 24 00:16:12 crc kubenswrapper[4676]: I0124 00:16:12.360277 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" event={"ID":"3c111688-154b-47fa-8f89-6e48007b1fec","Type":"ContainerDied","Data":"38dee7997e1559ef2efa2b8cdca200f0488dba16c0166cda27d0639e2bae7090"} Jan 24 00:16:13 crc kubenswrapper[4676]: I0124 00:16:13.370444 4676 generic.go:334] "Generic (PLEG): container finished" podID="ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" 
containerID="ab3b8aade9c01793933d385fb272beb855e6bd497784c470d8ff64768ede8f1c" exitCode=0 Jan 24 00:16:13 crc kubenswrapper[4676]: I0124 00:16:13.370495 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfz8s" event={"ID":"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407","Type":"ContainerDied","Data":"ab3b8aade9c01793933d385fb272beb855e6bd497784c470d8ff64768ede8f1c"} Jan 24 00:16:13 crc kubenswrapper[4676]: I0124 00:16:13.375783 4676 generic.go:334] "Generic (PLEG): container finished" podID="3c111688-154b-47fa-8f89-6e48007b1fec" containerID="6e8cb074fed9adee705d124b9629ba61703c8cee8afacdcc25b520591cb5c008" exitCode=0 Jan 24 00:16:13 crc kubenswrapper[4676]: I0124 00:16:13.375845 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" event={"ID":"3c111688-154b-47fa-8f89-6e48007b1fec","Type":"ContainerDied","Data":"6e8cb074fed9adee705d124b9629ba61703c8cee8afacdcc25b520591cb5c008"} Jan 24 00:16:14 crc kubenswrapper[4676]: I0124 00:16:14.383642 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfz8s" event={"ID":"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407","Type":"ContainerStarted","Data":"d1d85267432a4570db782b58a91df6a8201f6bad193dc5837dfdbc50d8377005"} Jan 24 00:16:14 crc kubenswrapper[4676]: I0124 00:16:14.406675 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nfz8s" podStartSLOduration=2.85597101 podStartE2EDuration="6.40665859s" podCreationTimestamp="2026-01-24 00:16:08 +0000 UTC" firstStartedPulling="2026-01-24 00:16:10.349212378 +0000 UTC m=+754.379183379" lastFinishedPulling="2026-01-24 00:16:13.899899958 +0000 UTC m=+757.929870959" observedRunningTime="2026-01-24 00:16:14.403128602 +0000 UTC m=+758.433099613" watchObservedRunningTime="2026-01-24 00:16:14.40665859 +0000 UTC m=+758.436629591" Jan 24 00:16:14 crc 
kubenswrapper[4676]: I0124 00:16:14.625266 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" Jan 24 00:16:14 crc kubenswrapper[4676]: I0124 00:16:14.632141 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7wz4\" (UniqueName: \"kubernetes.io/projected/3c111688-154b-47fa-8f89-6e48007b1fec-kube-api-access-p7wz4\") pod \"3c111688-154b-47fa-8f89-6e48007b1fec\" (UID: \"3c111688-154b-47fa-8f89-6e48007b1fec\") " Jan 24 00:16:14 crc kubenswrapper[4676]: I0124 00:16:14.632203 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c111688-154b-47fa-8f89-6e48007b1fec-bundle\") pod \"3c111688-154b-47fa-8f89-6e48007b1fec\" (UID: \"3c111688-154b-47fa-8f89-6e48007b1fec\") " Jan 24 00:16:14 crc kubenswrapper[4676]: I0124 00:16:14.632230 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c111688-154b-47fa-8f89-6e48007b1fec-util\") pod \"3c111688-154b-47fa-8f89-6e48007b1fec\" (UID: \"3c111688-154b-47fa-8f89-6e48007b1fec\") " Jan 24 00:16:14 crc kubenswrapper[4676]: I0124 00:16:14.633601 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c111688-154b-47fa-8f89-6e48007b1fec-bundle" (OuterVolumeSpecName: "bundle") pod "3c111688-154b-47fa-8f89-6e48007b1fec" (UID: "3c111688-154b-47fa-8f89-6e48007b1fec"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:16:14 crc kubenswrapper[4676]: I0124 00:16:14.638732 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c111688-154b-47fa-8f89-6e48007b1fec-kube-api-access-p7wz4" (OuterVolumeSpecName: "kube-api-access-p7wz4") pod "3c111688-154b-47fa-8f89-6e48007b1fec" (UID: "3c111688-154b-47fa-8f89-6e48007b1fec"). InnerVolumeSpecName "kube-api-access-p7wz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:16:14 crc kubenswrapper[4676]: I0124 00:16:14.643395 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c111688-154b-47fa-8f89-6e48007b1fec-util" (OuterVolumeSpecName: "util") pod "3c111688-154b-47fa-8f89-6e48007b1fec" (UID: "3c111688-154b-47fa-8f89-6e48007b1fec"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:16:14 crc kubenswrapper[4676]: I0124 00:16:14.733083 4676 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c111688-154b-47fa-8f89-6e48007b1fec-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:16:14 crc kubenswrapper[4676]: I0124 00:16:14.733114 4676 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c111688-154b-47fa-8f89-6e48007b1fec-util\") on node \"crc\" DevicePath \"\"" Jan 24 00:16:14 crc kubenswrapper[4676]: I0124 00:16:14.733125 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7wz4\" (UniqueName: \"kubernetes.io/projected/3c111688-154b-47fa-8f89-6e48007b1fec-kube-api-access-p7wz4\") on node \"crc\" DevicePath \"\"" Jan 24 00:16:15 crc kubenswrapper[4676]: I0124 00:16:15.393828 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" 
event={"ID":"3c111688-154b-47fa-8f89-6e48007b1fec","Type":"ContainerDied","Data":"76a3eda93838b6bb0e5c18b77075ac901cd959dcd48b40594a45fdb7a357e745"} Jan 24 00:16:15 crc kubenswrapper[4676]: I0124 00:16:15.394130 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76a3eda93838b6bb0e5c18b77075ac901cd959dcd48b40594a45fdb7a357e745" Jan 24 00:16:15 crc kubenswrapper[4676]: I0124 00:16:15.393908 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j" Jan 24 00:16:18 crc kubenswrapper[4676]: I0124 00:16:18.577493 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:18 crc kubenswrapper[4676]: I0124 00:16:18.577563 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:19 crc kubenswrapper[4676]: I0124 00:16:19.630781 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nfz8s" podUID="ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" containerName="registry-server" probeResult="failure" output=< Jan 24 00:16:19 crc kubenswrapper[4676]: timeout: failed to connect service ":50051" within 1s Jan 24 00:16:19 crc kubenswrapper[4676]: > Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.008950 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb"] Jan 24 00:16:24 crc kubenswrapper[4676]: E0124 00:16:24.009357 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c111688-154b-47fa-8f89-6e48007b1fec" containerName="util" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.009367 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c111688-154b-47fa-8f89-6e48007b1fec" containerName="util" Jan 24 00:16:24 crc kubenswrapper[4676]: 
E0124 00:16:24.009390 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c111688-154b-47fa-8f89-6e48007b1fec" containerName="pull" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.009396 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c111688-154b-47fa-8f89-6e48007b1fec" containerName="pull" Jan 24 00:16:24 crc kubenswrapper[4676]: E0124 00:16:24.009408 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c111688-154b-47fa-8f89-6e48007b1fec" containerName="extract" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.009414 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c111688-154b-47fa-8f89-6e48007b1fec" containerName="extract" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.009505 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c111688-154b-47fa-8f89-6e48007b1fec" containerName="extract" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.010033 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.012683 4676 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.013062 4676 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fgjmf" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.013812 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.013977 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.014105 4676 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.040786 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb"] Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.149881 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrnlp\" (UniqueName: \"kubernetes.io/projected/59200cad-bd1a-472a-a1a1-adccb5211b21-kube-api-access-mrnlp\") pod \"metallb-operator-controller-manager-6868db74d6-qfhwb\" (UID: \"59200cad-bd1a-472a-a1a1-adccb5211b21\") " pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.149954 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59200cad-bd1a-472a-a1a1-adccb5211b21-webhook-cert\") pod 
\"metallb-operator-controller-manager-6868db74d6-qfhwb\" (UID: \"59200cad-bd1a-472a-a1a1-adccb5211b21\") " pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.150092 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59200cad-bd1a-472a-a1a1-adccb5211b21-apiservice-cert\") pod \"metallb-operator-controller-manager-6868db74d6-qfhwb\" (UID: \"59200cad-bd1a-472a-a1a1-adccb5211b21\") " pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.246574 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5"] Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.247449 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.250519 4676 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.250941 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrnlp\" (UniqueName: \"kubernetes.io/projected/59200cad-bd1a-472a-a1a1-adccb5211b21-kube-api-access-mrnlp\") pod \"metallb-operator-controller-manager-6868db74d6-qfhwb\" (UID: \"59200cad-bd1a-472a-a1a1-adccb5211b21\") " pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.250991 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59200cad-bd1a-472a-a1a1-adccb5211b21-webhook-cert\") pod \"metallb-operator-controller-manager-6868db74d6-qfhwb\" 
(UID: \"59200cad-bd1a-472a-a1a1-adccb5211b21\") " pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.251018 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59200cad-bd1a-472a-a1a1-adccb5211b21-apiservice-cert\") pod \"metallb-operator-controller-manager-6868db74d6-qfhwb\" (UID: \"59200cad-bd1a-472a-a1a1-adccb5211b21\") " pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.252945 4676 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.253026 4676 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ckwmv" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.257179 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59200cad-bd1a-472a-a1a1-adccb5211b21-apiservice-cert\") pod \"metallb-operator-controller-manager-6868db74d6-qfhwb\" (UID: \"59200cad-bd1a-472a-a1a1-adccb5211b21\") " pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.266247 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59200cad-bd1a-472a-a1a1-adccb5211b21-webhook-cert\") pod \"metallb-operator-controller-manager-6868db74d6-qfhwb\" (UID: \"59200cad-bd1a-472a-a1a1-adccb5211b21\") " pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.288080 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrnlp\" (UniqueName: 
\"kubernetes.io/projected/59200cad-bd1a-472a-a1a1-adccb5211b21-kube-api-access-mrnlp\") pod \"metallb-operator-controller-manager-6868db74d6-qfhwb\" (UID: \"59200cad-bd1a-472a-a1a1-adccb5211b21\") " pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.310554 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5"] Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.323753 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.352217 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvj86\" (UniqueName: \"kubernetes.io/projected/adfe0dac-5ac5-44b8-97db-088d1ac83d34-kube-api-access-tvj86\") pod \"metallb-operator-webhook-server-6dbffff8f5-cb5l5\" (UID: \"adfe0dac-5ac5-44b8-97db-088d1ac83d34\") " pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.353025 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/adfe0dac-5ac5-44b8-97db-088d1ac83d34-apiservice-cert\") pod \"metallb-operator-webhook-server-6dbffff8f5-cb5l5\" (UID: \"adfe0dac-5ac5-44b8-97db-088d1ac83d34\") " pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.353211 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adfe0dac-5ac5-44b8-97db-088d1ac83d34-webhook-cert\") pod \"metallb-operator-webhook-server-6dbffff8f5-cb5l5\" (UID: \"adfe0dac-5ac5-44b8-97db-088d1ac83d34\") " 
pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.458825 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvj86\" (UniqueName: \"kubernetes.io/projected/adfe0dac-5ac5-44b8-97db-088d1ac83d34-kube-api-access-tvj86\") pod \"metallb-operator-webhook-server-6dbffff8f5-cb5l5\" (UID: \"adfe0dac-5ac5-44b8-97db-088d1ac83d34\") " pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.458863 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/adfe0dac-5ac5-44b8-97db-088d1ac83d34-apiservice-cert\") pod \"metallb-operator-webhook-server-6dbffff8f5-cb5l5\" (UID: \"adfe0dac-5ac5-44b8-97db-088d1ac83d34\") " pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.458931 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adfe0dac-5ac5-44b8-97db-088d1ac83d34-webhook-cert\") pod \"metallb-operator-webhook-server-6dbffff8f5-cb5l5\" (UID: \"adfe0dac-5ac5-44b8-97db-088d1ac83d34\") " pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.462946 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/adfe0dac-5ac5-44b8-97db-088d1ac83d34-apiservice-cert\") pod \"metallb-operator-webhook-server-6dbffff8f5-cb5l5\" (UID: \"adfe0dac-5ac5-44b8-97db-088d1ac83d34\") " pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.463337 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/adfe0dac-5ac5-44b8-97db-088d1ac83d34-webhook-cert\") pod \"metallb-operator-webhook-server-6dbffff8f5-cb5l5\" (UID: \"adfe0dac-5ac5-44b8-97db-088d1ac83d34\") " pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.477665 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvj86\" (UniqueName: \"kubernetes.io/projected/adfe0dac-5ac5-44b8-97db-088d1ac83d34-kube-api-access-tvj86\") pod \"metallb-operator-webhook-server-6dbffff8f5-cb5l5\" (UID: \"adfe0dac-5ac5-44b8-97db-088d1ac83d34\") " pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.592680 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5" Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.642652 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb"] Jan 24 00:16:24 crc kubenswrapper[4676]: I0124 00:16:24.924935 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5"] Jan 24 00:16:24 crc kubenswrapper[4676]: W0124 00:16:24.933169 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadfe0dac_5ac5_44b8_97db_088d1ac83d34.slice/crio-50d12f22e386ef9c2b41817b928dd900066a8e50105761cc692396dae78c8ec4 WatchSource:0}: Error finding container 50d12f22e386ef9c2b41817b928dd900066a8e50105761cc692396dae78c8ec4: Status 404 returned error can't find the container with id 50d12f22e386ef9c2b41817b928dd900066a8e50105761cc692396dae78c8ec4 Jan 24 00:16:25 crc kubenswrapper[4676]: I0124 00:16:25.446475 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb" event={"ID":"59200cad-bd1a-472a-a1a1-adccb5211b21","Type":"ContainerStarted","Data":"0cb2a0d9fb39b905916b5fcc5105590a8bc9706a24b1d330dea1ede36464be07"} Jan 24 00:16:25 crc kubenswrapper[4676]: I0124 00:16:25.448792 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5" event={"ID":"adfe0dac-5ac5-44b8-97db-088d1ac83d34","Type":"ContainerStarted","Data":"50d12f22e386ef9c2b41817b928dd900066a8e50105761cc692396dae78c8ec4"} Jan 24 00:16:28 crc kubenswrapper[4676]: I0124 00:16:28.624574 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:28 crc kubenswrapper[4676]: I0124 00:16:28.676158 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:28 crc kubenswrapper[4676]: I0124 00:16:28.856525 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nfz8s"] Jan 24 00:16:30 crc kubenswrapper[4676]: I0124 00:16:30.475534 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb" event={"ID":"59200cad-bd1a-472a-a1a1-adccb5211b21","Type":"ContainerStarted","Data":"aa93327a37132aaaf13467f36f40f5999ca32c6929f4734fba4815d29bef0ab0"} Jan 24 00:16:30 crc kubenswrapper[4676]: I0124 00:16:30.476329 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb" Jan 24 00:16:30 crc kubenswrapper[4676]: I0124 00:16:30.477276 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5" 
event={"ID":"adfe0dac-5ac5-44b8-97db-088d1ac83d34","Type":"ContainerStarted","Data":"a4a1604634fabd24c3814d8510a89d532d058c19481ab88cd0b114d36f236462"} Jan 24 00:16:30 crc kubenswrapper[4676]: I0124 00:16:30.477530 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nfz8s" podUID="ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" containerName="registry-server" containerID="cri-o://d1d85267432a4570db782b58a91df6a8201f6bad193dc5837dfdbc50d8377005" gracePeriod=2 Jan 24 00:16:30 crc kubenswrapper[4676]: I0124 00:16:30.516689 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb" podStartSLOduration=2.030895947 podStartE2EDuration="7.516670154s" podCreationTimestamp="2026-01-24 00:16:23 +0000 UTC" firstStartedPulling="2026-01-24 00:16:24.686541083 +0000 UTC m=+768.716512084" lastFinishedPulling="2026-01-24 00:16:30.17231529 +0000 UTC m=+774.202286291" observedRunningTime="2026-01-24 00:16:30.513046613 +0000 UTC m=+774.543017614" watchObservedRunningTime="2026-01-24 00:16:30.516670154 +0000 UTC m=+774.546641155" Jan 24 00:16:30 crc kubenswrapper[4676]: I0124 00:16:30.539338 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5" podStartSLOduration=1.284091543 podStartE2EDuration="6.539322397s" podCreationTimestamp="2026-01-24 00:16:24 +0000 UTC" firstStartedPulling="2026-01-24 00:16:24.936477379 +0000 UTC m=+768.966448380" lastFinishedPulling="2026-01-24 00:16:30.191708233 +0000 UTC m=+774.221679234" observedRunningTime="2026-01-24 00:16:30.537861902 +0000 UTC m=+774.567832903" watchObservedRunningTime="2026-01-24 00:16:30.539322397 +0000 UTC m=+774.569293398" Jan 24 00:16:30 crc kubenswrapper[4676]: I0124 00:16:30.844912 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:30 crc kubenswrapper[4676]: I0124 00:16:30.956347 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-catalog-content\") pod \"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407\" (UID: \"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407\") " Jan 24 00:16:30 crc kubenswrapper[4676]: I0124 00:16:30.956529 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-utilities\") pod \"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407\" (UID: \"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407\") " Jan 24 00:16:30 crc kubenswrapper[4676]: I0124 00:16:30.956589 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hnz4\" (UniqueName: \"kubernetes.io/projected/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-kube-api-access-9hnz4\") pod \"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407\" (UID: \"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407\") " Jan 24 00:16:30 crc kubenswrapper[4676]: I0124 00:16:30.958066 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-utilities" (OuterVolumeSpecName: "utilities") pod "ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" (UID: "ca7145d4-57a1-4ad4-acc8-b7ca02cfd407"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:16:30 crc kubenswrapper[4676]: I0124 00:16:30.967289 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-kube-api-access-9hnz4" (OuterVolumeSpecName: "kube-api-access-9hnz4") pod "ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" (UID: "ca7145d4-57a1-4ad4-acc8-b7ca02cfd407"). InnerVolumeSpecName "kube-api-access-9hnz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.057789 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.057829 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hnz4\" (UniqueName: \"kubernetes.io/projected/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-kube-api-access-9hnz4\") on node \"crc\" DevicePath \"\"" Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.100233 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" (UID: "ca7145d4-57a1-4ad4-acc8-b7ca02cfd407"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.158568 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.493987 4676 generic.go:334] "Generic (PLEG): container finished" podID="ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" containerID="d1d85267432a4570db782b58a91df6a8201f6bad193dc5837dfdbc50d8377005" exitCode=0 Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.494091 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nfz8s" Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.494187 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfz8s" event={"ID":"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407","Type":"ContainerDied","Data":"d1d85267432a4570db782b58a91df6a8201f6bad193dc5837dfdbc50d8377005"} Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.495186 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfz8s" event={"ID":"ca7145d4-57a1-4ad4-acc8-b7ca02cfd407","Type":"ContainerDied","Data":"8153314f2ab7211dbb0879d3369667b48e7906201af8f125d1bc6d0c80e1209a"} Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.495225 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5" Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.495258 4676 scope.go:117] "RemoveContainer" containerID="d1d85267432a4570db782b58a91df6a8201f6bad193dc5837dfdbc50d8377005" Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.525594 4676 scope.go:117] "RemoveContainer" containerID="ab3b8aade9c01793933d385fb272beb855e6bd497784c470d8ff64768ede8f1c" Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.525656 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nfz8s"] Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.535155 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nfz8s"] Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.549372 4676 scope.go:117] "RemoveContainer" containerID="2687c4aebd5cf8c6cdb52a49834465bc2ff9fdc1624a9f7ea9eb31f3d327a7a9" Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.573684 4676 scope.go:117] "RemoveContainer" containerID="d1d85267432a4570db782b58a91df6a8201f6bad193dc5837dfdbc50d8377005" Jan 24 00:16:31 crc 
kubenswrapper[4676]: E0124 00:16:31.574181 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1d85267432a4570db782b58a91df6a8201f6bad193dc5837dfdbc50d8377005\": container with ID starting with d1d85267432a4570db782b58a91df6a8201f6bad193dc5837dfdbc50d8377005 not found: ID does not exist" containerID="d1d85267432a4570db782b58a91df6a8201f6bad193dc5837dfdbc50d8377005" Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.574219 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1d85267432a4570db782b58a91df6a8201f6bad193dc5837dfdbc50d8377005"} err="failed to get container status \"d1d85267432a4570db782b58a91df6a8201f6bad193dc5837dfdbc50d8377005\": rpc error: code = NotFound desc = could not find container \"d1d85267432a4570db782b58a91df6a8201f6bad193dc5837dfdbc50d8377005\": container with ID starting with d1d85267432a4570db782b58a91df6a8201f6bad193dc5837dfdbc50d8377005 not found: ID does not exist" Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.574245 4676 scope.go:117] "RemoveContainer" containerID="ab3b8aade9c01793933d385fb272beb855e6bd497784c470d8ff64768ede8f1c" Jan 24 00:16:31 crc kubenswrapper[4676]: E0124 00:16:31.574918 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab3b8aade9c01793933d385fb272beb855e6bd497784c470d8ff64768ede8f1c\": container with ID starting with ab3b8aade9c01793933d385fb272beb855e6bd497784c470d8ff64768ede8f1c not found: ID does not exist" containerID="ab3b8aade9c01793933d385fb272beb855e6bd497784c470d8ff64768ede8f1c" Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.574944 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3b8aade9c01793933d385fb272beb855e6bd497784c470d8ff64768ede8f1c"} err="failed to get container status 
\"ab3b8aade9c01793933d385fb272beb855e6bd497784c470d8ff64768ede8f1c\": rpc error: code = NotFound desc = could not find container \"ab3b8aade9c01793933d385fb272beb855e6bd497784c470d8ff64768ede8f1c\": container with ID starting with ab3b8aade9c01793933d385fb272beb855e6bd497784c470d8ff64768ede8f1c not found: ID does not exist"
Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.574962 4676 scope.go:117] "RemoveContainer" containerID="2687c4aebd5cf8c6cdb52a49834465bc2ff9fdc1624a9f7ea9eb31f3d327a7a9"
Jan 24 00:16:31 crc kubenswrapper[4676]: E0124 00:16:31.575307 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2687c4aebd5cf8c6cdb52a49834465bc2ff9fdc1624a9f7ea9eb31f3d327a7a9\": container with ID starting with 2687c4aebd5cf8c6cdb52a49834465bc2ff9fdc1624a9f7ea9eb31f3d327a7a9 not found: ID does not exist" containerID="2687c4aebd5cf8c6cdb52a49834465bc2ff9fdc1624a9f7ea9eb31f3d327a7a9"
Jan 24 00:16:31 crc kubenswrapper[4676]: I0124 00:16:31.575355 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2687c4aebd5cf8c6cdb52a49834465bc2ff9fdc1624a9f7ea9eb31f3d327a7a9"} err="failed to get container status \"2687c4aebd5cf8c6cdb52a49834465bc2ff9fdc1624a9f7ea9eb31f3d327a7a9\": rpc error: code = NotFound desc = could not find container \"2687c4aebd5cf8c6cdb52a49834465bc2ff9fdc1624a9f7ea9eb31f3d327a7a9\": container with ID starting with 2687c4aebd5cf8c6cdb52a49834465bc2ff9fdc1624a9f7ea9eb31f3d327a7a9 not found: ID does not exist"
Jan 24 00:16:32 crc kubenswrapper[4676]: I0124 00:16:32.264319 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" path="/var/lib/kubelet/pods/ca7145d4-57a1-4ad4-acc8-b7ca02cfd407/volumes"
Jan 24 00:16:39 crc kubenswrapper[4676]: I0124 00:16:39.364334 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 00:16:39 crc kubenswrapper[4676]: I0124 00:16:39.364804 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 00:16:39 crc kubenswrapper[4676]: I0124 00:16:39.364849 4676 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz"
Jan 24 00:16:39 crc kubenswrapper[4676]: I0124 00:16:39.365392 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"daf4dfd81dc7faee8c5a37cce872ffde5731f2d91708788dd42d2993fec18ba6"} pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 24 00:16:39 crc kubenswrapper[4676]: I0124 00:16:39.365440 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" containerID="cri-o://daf4dfd81dc7faee8c5a37cce872ffde5731f2d91708788dd42d2993fec18ba6" gracePeriod=600
Jan 24 00:16:40 crc kubenswrapper[4676]: I0124 00:16:40.544345 4676 generic.go:334] "Generic (PLEG): container finished" podID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerID="daf4dfd81dc7faee8c5a37cce872ffde5731f2d91708788dd42d2993fec18ba6" exitCode=0
Jan 24 00:16:40 crc kubenswrapper[4676]: I0124 00:16:40.544409 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerDied","Data":"daf4dfd81dc7faee8c5a37cce872ffde5731f2d91708788dd42d2993fec18ba6"}
Jan 24 00:16:40 crc kubenswrapper[4676]: I0124 00:16:40.544683 4676 scope.go:117] "RemoveContainer" containerID="51936651fd255c79ecefd9c199d1a0336083a72c956e86d582874060fc907470"
Jan 24 00:16:41 crc kubenswrapper[4676]: I0124 00:16:41.566858 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"2687110039e3aba350a72ed3647bbafb008d22f301a8b50baa7159c6eca5ba33"}
Jan 24 00:16:44 crc kubenswrapper[4676]: I0124 00:16:44.601148 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6dbffff8f5-cb5l5"
Jan 24 00:17:04 crc kubenswrapper[4676]: I0124 00:17:04.328369 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6868db74d6-qfhwb"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.276133 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-szpqt"]
Jan 24 00:17:05 crc kubenswrapper[4676]: E0124 00:17:05.276430 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" containerName="registry-server"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.276448 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" containerName="registry-server"
Jan 24 00:17:05 crc kubenswrapper[4676]: E0124 00:17:05.276458 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" containerName="extract-content"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.276465 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" containerName="extract-content"
Jan 24 00:17:05 crc kubenswrapper[4676]: E0124 00:17:05.276482 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" containerName="extract-utilities"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.276488 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" containerName="extract-utilities"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.276592 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca7145d4-57a1-4ad4-acc8-b7ca02cfd407" containerName="registry-server"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.278405 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.281572 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59"]
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.282499 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.282967 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.287114 4676 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.287610 4676 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-98z5t"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.296631 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59"]
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.298030 4676 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.389284 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-zh6td"]
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.390139 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-zh6td"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.393013 4676 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.393191 4676 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-cbd85"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.393552 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.393874 4676 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.402477 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8a7798b0-f97f-4804-a406-22ecc8b45677-reloader\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.402530 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8a7798b0-f97f-4804-a406-22ecc8b45677-frr-conf\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.402586 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b5c2\" (UniqueName: \"kubernetes.io/projected/8a7798b0-f97f-4804-a406-22ecc8b45677-kube-api-access-7b5c2\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.402614 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftkn7\" (UniqueName: \"kubernetes.io/projected/2ce22d83-ee4f-4ad7-8882-b876d4ed52a2-kube-api-access-ftkn7\") pod \"frr-k8s-webhook-server-7df86c4f6c-s7b59\" (UID: \"2ce22d83-ee4f-4ad7-8882-b876d4ed52a2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.402637 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8a7798b0-f97f-4804-a406-22ecc8b45677-frr-startup\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.402661 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ce22d83-ee4f-4ad7-8882-b876d4ed52a2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-s7b59\" (UID: \"2ce22d83-ee4f-4ad7-8882-b876d4ed52a2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.402675 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8a7798b0-f97f-4804-a406-22ecc8b45677-frr-sockets\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.402690 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8a7798b0-f97f-4804-a406-22ecc8b45677-metrics\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.402727 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a7798b0-f97f-4804-a406-22ecc8b45677-metrics-certs\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.408399 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-jqk4f"]
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.409139 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-jqk4f"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.410495 4676 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.430685 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-jqk4f"]
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.503817 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8a7798b0-f97f-4804-a406-22ecc8b45677-frr-startup\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.503854 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ce22d83-ee4f-4ad7-8882-b876d4ed52a2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-s7b59\" (UID: \"2ce22d83-ee4f-4ad7-8882-b876d4ed52a2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.503872 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8a7798b0-f97f-4804-a406-22ecc8b45677-frr-sockets\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.503891 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8a7798b0-f97f-4804-a406-22ecc8b45677-metrics\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.503923 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a7798b0-f97f-4804-a406-22ecc8b45677-metrics-certs\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.503947 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5517856-fac7-4312-ab46-86bbd5c1282d-cert\") pod \"controller-6968d8fdc4-jqk4f\" (UID: \"f5517856-fac7-4312-ab46-86bbd5c1282d\") " pod="metallb-system/controller-6968d8fdc4-jqk4f"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.503998 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwl55\" (UniqueName: \"kubernetes.io/projected/7da3019e-01de-4671-a78f-6c0d2e57fde3-kube-api-access-qwl55\") pod \"speaker-zh6td\" (UID: \"7da3019e-01de-4671-a78f-6c0d2e57fde3\") " pod="metallb-system/speaker-zh6td"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.504018 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8a7798b0-f97f-4804-a406-22ecc8b45677-reloader\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: E0124 00:17:05.504046 4676 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.504068 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7da3019e-01de-4671-a78f-6c0d2e57fde3-metrics-certs\") pod \"speaker-zh6td\" (UID: \"7da3019e-01de-4671-a78f-6c0d2e57fde3\") " pod="metallb-system/speaker-zh6td"
Jan 24 00:17:05 crc kubenswrapper[4676]: E0124 00:17:05.504117 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a7798b0-f97f-4804-a406-22ecc8b45677-metrics-certs podName:8a7798b0-f97f-4804-a406-22ecc8b45677 nodeName:}" failed. No retries permitted until 2026-01-24 00:17:06.004099798 +0000 UTC m=+810.034070799 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8a7798b0-f97f-4804-a406-22ecc8b45677-metrics-certs") pod "frr-k8s-szpqt" (UID: "8a7798b0-f97f-4804-a406-22ecc8b45677") : secret "frr-k8s-certs-secret" not found
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.504148 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5517856-fac7-4312-ab46-86bbd5c1282d-metrics-certs\") pod \"controller-6968d8fdc4-jqk4f\" (UID: \"f5517856-fac7-4312-ab46-86bbd5c1282d\") " pod="metallb-system/controller-6968d8fdc4-jqk4f"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.504172 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8a7798b0-f97f-4804-a406-22ecc8b45677-frr-conf\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.504194 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7da3019e-01de-4671-a78f-6c0d2e57fde3-metallb-excludel2\") pod \"speaker-zh6td\" (UID: \"7da3019e-01de-4671-a78f-6c0d2e57fde3\") " pod="metallb-system/speaker-zh6td"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.504220 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b5c2\" (UniqueName: \"kubernetes.io/projected/8a7798b0-f97f-4804-a406-22ecc8b45677-kube-api-access-7b5c2\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.504243 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrkgb\" (UniqueName: \"kubernetes.io/projected/f5517856-fac7-4312-ab46-86bbd5c1282d-kube-api-access-rrkgb\") pod \"controller-6968d8fdc4-jqk4f\" (UID: \"f5517856-fac7-4312-ab46-86bbd5c1282d\") " pod="metallb-system/controller-6968d8fdc4-jqk4f"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.504267 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftkn7\" (UniqueName: \"kubernetes.io/projected/2ce22d83-ee4f-4ad7-8882-b876d4ed52a2-kube-api-access-ftkn7\") pod \"frr-k8s-webhook-server-7df86c4f6c-s7b59\" (UID: \"2ce22d83-ee4f-4ad7-8882-b876d4ed52a2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.504293 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7da3019e-01de-4671-a78f-6c0d2e57fde3-memberlist\") pod \"speaker-zh6td\" (UID: \"7da3019e-01de-4671-a78f-6c0d2e57fde3\") " pod="metallb-system/speaker-zh6td"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.504369 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8a7798b0-f97f-4804-a406-22ecc8b45677-metrics\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.504542 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8a7798b0-f97f-4804-a406-22ecc8b45677-frr-conf\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.504748 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8a7798b0-f97f-4804-a406-22ecc8b45677-frr-sockets\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.504771 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8a7798b0-f97f-4804-a406-22ecc8b45677-frr-startup\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.504888 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8a7798b0-f97f-4804-a406-22ecc8b45677-reloader\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.522312 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ce22d83-ee4f-4ad7-8882-b876d4ed52a2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-s7b59\" (UID: \"2ce22d83-ee4f-4ad7-8882-b876d4ed52a2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.524841 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b5c2\" (UniqueName: \"kubernetes.io/projected/8a7798b0-f97f-4804-a406-22ecc8b45677-kube-api-access-7b5c2\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.525416 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftkn7\" (UniqueName: \"kubernetes.io/projected/2ce22d83-ee4f-4ad7-8882-b876d4ed52a2-kube-api-access-ftkn7\") pod \"frr-k8s-webhook-server-7df86c4f6c-s7b59\" (UID: \"2ce22d83-ee4f-4ad7-8882-b876d4ed52a2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.602494 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.604834 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5517856-fac7-4312-ab46-86bbd5c1282d-metrics-certs\") pod \"controller-6968d8fdc4-jqk4f\" (UID: \"f5517856-fac7-4312-ab46-86bbd5c1282d\") " pod="metallb-system/controller-6968d8fdc4-jqk4f"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.604869 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7da3019e-01de-4671-a78f-6c0d2e57fde3-metallb-excludel2\") pod \"speaker-zh6td\" (UID: \"7da3019e-01de-4671-a78f-6c0d2e57fde3\") " pod="metallb-system/speaker-zh6td"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.604900 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrkgb\" (UniqueName: \"kubernetes.io/projected/f5517856-fac7-4312-ab46-86bbd5c1282d-kube-api-access-rrkgb\") pod \"controller-6968d8fdc4-jqk4f\" (UID: \"f5517856-fac7-4312-ab46-86bbd5c1282d\") " pod="metallb-system/controller-6968d8fdc4-jqk4f"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.604922 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7da3019e-01de-4671-a78f-6c0d2e57fde3-memberlist\") pod \"speaker-zh6td\" (UID: \"7da3019e-01de-4671-a78f-6c0d2e57fde3\") " pod="metallb-system/speaker-zh6td"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.604974 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5517856-fac7-4312-ab46-86bbd5c1282d-cert\") pod \"controller-6968d8fdc4-jqk4f\" (UID: \"f5517856-fac7-4312-ab46-86bbd5c1282d\") " pod="metallb-system/controller-6968d8fdc4-jqk4f"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.604992 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwl55\" (UniqueName: \"kubernetes.io/projected/7da3019e-01de-4671-a78f-6c0d2e57fde3-kube-api-access-qwl55\") pod \"speaker-zh6td\" (UID: \"7da3019e-01de-4671-a78f-6c0d2e57fde3\") " pod="metallb-system/speaker-zh6td"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.605012 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7da3019e-01de-4671-a78f-6c0d2e57fde3-metrics-certs\") pod \"speaker-zh6td\" (UID: \"7da3019e-01de-4671-a78f-6c0d2e57fde3\") " pod="metallb-system/speaker-zh6td"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.606699 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7da3019e-01de-4671-a78f-6c0d2e57fde3-metallb-excludel2\") pod \"speaker-zh6td\" (UID: \"7da3019e-01de-4671-a78f-6c0d2e57fde3\") " pod="metallb-system/speaker-zh6td"
Jan 24 00:17:05 crc kubenswrapper[4676]: E0124 00:17:05.605610 4676 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 24 00:17:05 crc kubenswrapper[4676]: E0124 00:17:05.606813 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7da3019e-01de-4671-a78f-6c0d2e57fde3-memberlist podName:7da3019e-01de-4671-a78f-6c0d2e57fde3 nodeName:}" failed. No retries permitted until 2026-01-24 00:17:06.10678807 +0000 UTC m=+810.136759071 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7da3019e-01de-4671-a78f-6c0d2e57fde3-memberlist") pod "speaker-zh6td" (UID: "7da3019e-01de-4671-a78f-6c0d2e57fde3") : secret "metallb-memberlist" not found
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.607700 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7da3019e-01de-4671-a78f-6c0d2e57fde3-metrics-certs\") pod \"speaker-zh6td\" (UID: \"7da3019e-01de-4671-a78f-6c0d2e57fde3\") " pod="metallb-system/speaker-zh6td"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.613479 4676 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.633445 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwl55\" (UniqueName: \"kubernetes.io/projected/7da3019e-01de-4671-a78f-6c0d2e57fde3-kube-api-access-qwl55\") pod \"speaker-zh6td\" (UID: \"7da3019e-01de-4671-a78f-6c0d2e57fde3\") " pod="metallb-system/speaker-zh6td"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.635573 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrkgb\" (UniqueName: \"kubernetes.io/projected/f5517856-fac7-4312-ab46-86bbd5c1282d-kube-api-access-rrkgb\") pod \"controller-6968d8fdc4-jqk4f\" (UID: \"f5517856-fac7-4312-ab46-86bbd5c1282d\") " pod="metallb-system/controller-6968d8fdc4-jqk4f"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.637869 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5517856-fac7-4312-ab46-86bbd5c1282d-metrics-certs\") pod \"controller-6968d8fdc4-jqk4f\" (UID: \"f5517856-fac7-4312-ab46-86bbd5c1282d\") " pod="metallb-system/controller-6968d8fdc4-jqk4f"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.638417 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5517856-fac7-4312-ab46-86bbd5c1282d-cert\") pod \"controller-6968d8fdc4-jqk4f\" (UID: \"f5517856-fac7-4312-ab46-86bbd5c1282d\") " pod="metallb-system/controller-6968d8fdc4-jqk4f"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.720748 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-jqk4f"
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.839368 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59"]
Jan 24 00:17:05 crc kubenswrapper[4676]: I0124 00:17:05.897880 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-jqk4f"]
Jan 24 00:17:05 crc kubenswrapper[4676]: W0124 00:17:05.902574 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5517856_fac7_4312_ab46_86bbd5c1282d.slice/crio-7067f52856da71bef338b31656e506d7ff1da9055f6a009db9a859bfeee222b0 WatchSource:0}: Error finding container 7067f52856da71bef338b31656e506d7ff1da9055f6a009db9a859bfeee222b0: Status 404 returned error can't find the container with id 7067f52856da71bef338b31656e506d7ff1da9055f6a009db9a859bfeee222b0
Jan 24 00:17:06 crc kubenswrapper[4676]: I0124 00:17:06.021187 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a7798b0-f97f-4804-a406-22ecc8b45677-metrics-certs\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:06 crc kubenswrapper[4676]: I0124 00:17:06.026306 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a7798b0-f97f-4804-a406-22ecc8b45677-metrics-certs\") pod \"frr-k8s-szpqt\" (UID: \"8a7798b0-f97f-4804-a406-22ecc8b45677\") " pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:06 crc kubenswrapper[4676]: I0124 00:17:06.121911 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7da3019e-01de-4671-a78f-6c0d2e57fde3-memberlist\") pod \"speaker-zh6td\" (UID: \"7da3019e-01de-4671-a78f-6c0d2e57fde3\") " pod="metallb-system/speaker-zh6td"
Jan 24 00:17:06 crc kubenswrapper[4676]: E0124 00:17:06.122069 4676 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 24 00:17:06 crc kubenswrapper[4676]: E0124 00:17:06.122138 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7da3019e-01de-4671-a78f-6c0d2e57fde3-memberlist podName:7da3019e-01de-4671-a78f-6c0d2e57fde3 nodeName:}" failed. No retries permitted until 2026-01-24 00:17:07.122118644 +0000 UTC m=+811.152089665 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7da3019e-01de-4671-a78f-6c0d2e57fde3-memberlist") pod "speaker-zh6td" (UID: "7da3019e-01de-4671-a78f-6c0d2e57fde3") : secret "metallb-memberlist" not found
Jan 24 00:17:06 crc kubenswrapper[4676]: I0124 00:17:06.197817 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:06 crc kubenswrapper[4676]: I0124 00:17:06.731877 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59" event={"ID":"2ce22d83-ee4f-4ad7-8882-b876d4ed52a2","Type":"ContainerStarted","Data":"a334528e9fcce16672cb9c179bec89136fc19677c417e99af539b99acfae4ef1"}
Jan 24 00:17:06 crc kubenswrapper[4676]: I0124 00:17:06.733498 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-jqk4f" event={"ID":"f5517856-fac7-4312-ab46-86bbd5c1282d","Type":"ContainerStarted","Data":"2c5cc057854c659ba305a531384a04657496d8ab3c63ea3c510ee6ee420d51fe"}
Jan 24 00:17:06 crc kubenswrapper[4676]: I0124 00:17:06.733523 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-jqk4f" event={"ID":"f5517856-fac7-4312-ab46-86bbd5c1282d","Type":"ContainerStarted","Data":"68aec8e91b27920e6ce721982604cb53f4298058a221854b41ee0607d62c9cc5"}
Jan 24 00:17:06 crc kubenswrapper[4676]: I0124 00:17:06.733537 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-jqk4f" event={"ID":"f5517856-fac7-4312-ab46-86bbd5c1282d","Type":"ContainerStarted","Data":"7067f52856da71bef338b31656e506d7ff1da9055f6a009db9a859bfeee222b0"}
Jan 24 00:17:06 crc kubenswrapper[4676]: I0124 00:17:06.733583 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-jqk4f"
Jan 24 00:17:06 crc kubenswrapper[4676]: I0124 00:17:06.734619 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szpqt" event={"ID":"8a7798b0-f97f-4804-a406-22ecc8b45677","Type":"ContainerStarted","Data":"a476192e18a4d4b2e0b1eb57b0acf4c20d24261e27c719e71607eef42208dee8"}
Jan 24 00:17:06 crc kubenswrapper[4676]: I0124 00:17:06.751926 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-jqk4f" podStartSLOduration=1.75190868 podStartE2EDuration="1.75190868s" podCreationTimestamp="2026-01-24 00:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:17:06.74896235 +0000 UTC m=+810.778933351" watchObservedRunningTime="2026-01-24 00:17:06.75190868 +0000 UTC m=+810.781879681"
Jan 24 00:17:07 crc kubenswrapper[4676]: I0124 00:17:07.139080 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7da3019e-01de-4671-a78f-6c0d2e57fde3-memberlist\") pod \"speaker-zh6td\" (UID: \"7da3019e-01de-4671-a78f-6c0d2e57fde3\") " pod="metallb-system/speaker-zh6td"
Jan 24 00:17:07 crc kubenswrapper[4676]: I0124 00:17:07.150629 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7da3019e-01de-4671-a78f-6c0d2e57fde3-memberlist\") pod \"speaker-zh6td\" (UID: \"7da3019e-01de-4671-a78f-6c0d2e57fde3\") " pod="metallb-system/speaker-zh6td"
Jan 24 00:17:07 crc kubenswrapper[4676]: I0124 00:17:07.202545 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-zh6td"
Jan 24 00:17:07 crc kubenswrapper[4676]: I0124 00:17:07.741800 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zh6td" event={"ID":"7da3019e-01de-4671-a78f-6c0d2e57fde3","Type":"ContainerStarted","Data":"b5974bf994f467296463e48aa1ec39d04a14897f0084f65a861afbe278a93e9b"}
Jan 24 00:17:07 crc kubenswrapper[4676]: I0124 00:17:07.742116 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zh6td" event={"ID":"7da3019e-01de-4671-a78f-6c0d2e57fde3","Type":"ContainerStarted","Data":"f25a970d7f4afe68b1d16f8a6f98e4e45811ab615364d8b0c2c28c0b4488306f"}
Jan 24 00:17:08 crc kubenswrapper[4676]: I0124 00:17:08.793930 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zh6td" event={"ID":"7da3019e-01de-4671-a78f-6c0d2e57fde3","Type":"ContainerStarted","Data":"54b95f7fa5ad58bcb9fb952d4e0d97ba3049a39c032a908cae1f94b4d39a039a"}
Jan 24 00:17:08 crc kubenswrapper[4676]: I0124 00:17:08.795014 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-zh6td"
Jan 24 00:17:08 crc kubenswrapper[4676]: I0124 00:17:08.844740 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-zh6td" podStartSLOduration=3.8447223619999997 podStartE2EDuration="3.844722362s" podCreationTimestamp="2026-01-24 00:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:17:08.840692578 +0000 UTC m=+812.870663579" watchObservedRunningTime="2026-01-24 00:17:08.844722362 +0000 UTC m=+812.874693363"
Jan 24 00:17:14 crc kubenswrapper[4676]: I0124 00:17:14.825659 4676 generic.go:334] "Generic (PLEG): container finished" podID="8a7798b0-f97f-4804-a406-22ecc8b45677" containerID="0f872ee63e1cdba2e1e4761f82b3a98b3f10875ae88354fd9c9a65bcdf49ada5" exitCode=0
Jan 24 00:17:14 crc kubenswrapper[4676]: I0124 00:17:14.825757 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szpqt" event={"ID":"8a7798b0-f97f-4804-a406-22ecc8b45677","Type":"ContainerDied","Data":"0f872ee63e1cdba2e1e4761f82b3a98b3f10875ae88354fd9c9a65bcdf49ada5"}
Jan 24 00:17:14 crc kubenswrapper[4676]: I0124 00:17:14.828887 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59" event={"ID":"2ce22d83-ee4f-4ad7-8882-b876d4ed52a2","Type":"ContainerStarted","Data":"9468eea7ba989d37d33d89820231269843b9aa5eb2e025ae0bc27dc696636687"}
Jan 24 00:17:14 crc kubenswrapper[4676]: I0124 00:17:14.829006 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59"
Jan 24 00:17:14 crc kubenswrapper[4676]: I0124 00:17:14.876915 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59" podStartSLOduration=1.765514596 podStartE2EDuration="9.876898442s" podCreationTimestamp="2026-01-24 00:17:05 +0000 UTC" firstStartedPulling="2026-01-24 00:17:05.850059932 +0000 UTC m=+809.880030933" lastFinishedPulling="2026-01-24 00:17:13.961443768 +0000 UTC m=+817.991414779" observedRunningTime="2026-01-24 00:17:14.875897322 +0000 UTC m=+818.905868323" watchObservedRunningTime="2026-01-24 00:17:14.876898442 +0000 UTC m=+818.906869443"
Jan 24 00:17:15 crc kubenswrapper[4676]: I0124 00:17:15.845770 4676 generic.go:334] "Generic (PLEG): container finished" podID="8a7798b0-f97f-4804-a406-22ecc8b45677" containerID="985d356157001ea96677721ef7091464e7dcd5614907239f9ff5a6b869083e2c" exitCode=0
Jan 24 00:17:15 crc kubenswrapper[4676]: I0124 00:17:15.845992 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szpqt" event={"ID":"8a7798b0-f97f-4804-a406-22ecc8b45677","Type":"ContainerDied","Data":"985d356157001ea96677721ef7091464e7dcd5614907239f9ff5a6b869083e2c"}
Jan 24 00:17:16 crc kubenswrapper[4676]: I0124 00:17:16.854046 4676 generic.go:334] "Generic (PLEG): container finished" podID="8a7798b0-f97f-4804-a406-22ecc8b45677" containerID="45b168d21f1a0451303b67d18b62f8d1c2eb635d47fbb9c410ec73ac7826281e" exitCode=0
Jan 24 00:17:16 crc kubenswrapper[4676]: I0124 00:17:16.854096 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szpqt" event={"ID":"8a7798b0-f97f-4804-a406-22ecc8b45677","Type":"ContainerDied","Data":"45b168d21f1a0451303b67d18b62f8d1c2eb635d47fbb9c410ec73ac7826281e"}
Jan 24 00:17:17 crc kubenswrapper[4676]: I0124 00:17:17.207833 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-zh6td"
Jan 24 00:17:17 crc kubenswrapper[4676]: I0124 00:17:17.865900 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szpqt" event={"ID":"8a7798b0-f97f-4804-a406-22ecc8b45677","Type":"ContainerStarted","Data":"639934f83046a208ac6c36d423b7b7a8a2fc9ac7a06f9d72cd3f40649b7625a5"}
Jan 24 00:17:18 crc kubenswrapper[4676]: I0124 00:17:18.877180 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szpqt" event={"ID":"8a7798b0-f97f-4804-a406-22ecc8b45677","Type":"ContainerStarted","Data":"89222040cea7839f9811fefb567adfa49b75cbc63cb29f34599ec76da946001d"}
Jan 24 00:17:18 crc kubenswrapper[4676]: I0124 00:17:18.877512 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-szpqt"
Jan 24 00:17:18 crc kubenswrapper[4676]: I0124 00:17:18.877527 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szpqt"
event={"ID":"8a7798b0-f97f-4804-a406-22ecc8b45677","Type":"ContainerStarted","Data":"32cddb0f809234929d53e50a67f6ddf97e1ee1e6251d8a1c2347acefe6889e3d"} Jan 24 00:17:18 crc kubenswrapper[4676]: I0124 00:17:18.877540 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szpqt" event={"ID":"8a7798b0-f97f-4804-a406-22ecc8b45677","Type":"ContainerStarted","Data":"657915650d3d7e15f1f43806bf755675fc8ae3f14e29e2da6a3a5c061c6a364b"} Jan 24 00:17:18 crc kubenswrapper[4676]: I0124 00:17:18.877552 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szpqt" event={"ID":"8a7798b0-f97f-4804-a406-22ecc8b45677","Type":"ContainerStarted","Data":"ebfd023ac87216fefc6a03fae974686e76d94aa90ae27e027fffdfe8da7ac3a9"} Jan 24 00:17:18 crc kubenswrapper[4676]: I0124 00:17:18.877565 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-szpqt" event={"ID":"8a7798b0-f97f-4804-a406-22ecc8b45677","Type":"ContainerStarted","Data":"a2e58590b3d95ad6600c527f73595ba81f0848ac3129fd5349dd5406a3270855"} Jan 24 00:17:18 crc kubenswrapper[4676]: I0124 00:17:18.903964 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-szpqt" podStartSLOduration=6.241574195 podStartE2EDuration="13.903945115s" podCreationTimestamp="2026-01-24 00:17:05 +0000 UTC" firstStartedPulling="2026-01-24 00:17:06.322812944 +0000 UTC m=+810.352783945" lastFinishedPulling="2026-01-24 00:17:13.985183854 +0000 UTC m=+818.015154865" observedRunningTime="2026-01-24 00:17:18.898804137 +0000 UTC m=+822.928775138" watchObservedRunningTime="2026-01-24 00:17:18.903945115 +0000 UTC m=+822.933916136" Jan 24 00:17:20 crc kubenswrapper[4676]: I0124 00:17:20.212251 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6ntrt"] Jan 24 00:17:20 crc kubenswrapper[4676]: I0124 00:17:20.213610 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6ntrt" Jan 24 00:17:20 crc kubenswrapper[4676]: I0124 00:17:20.216629 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 24 00:17:20 crc kubenswrapper[4676]: I0124 00:17:20.217322 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-jsvzn" Jan 24 00:17:20 crc kubenswrapper[4676]: I0124 00:17:20.221173 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 24 00:17:20 crc kubenswrapper[4676]: I0124 00:17:20.233648 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6ntrt"] Jan 24 00:17:20 crc kubenswrapper[4676]: I0124 00:17:20.337199 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4vfp\" (UniqueName: \"kubernetes.io/projected/5425e3ea-c929-4943-bd14-d4e193992052-kube-api-access-r4vfp\") pod \"openstack-operator-index-6ntrt\" (UID: \"5425e3ea-c929-4943-bd14-d4e193992052\") " pod="openstack-operators/openstack-operator-index-6ntrt" Jan 24 00:17:20 crc kubenswrapper[4676]: I0124 00:17:20.439304 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4vfp\" (UniqueName: \"kubernetes.io/projected/5425e3ea-c929-4943-bd14-d4e193992052-kube-api-access-r4vfp\") pod \"openstack-operator-index-6ntrt\" (UID: \"5425e3ea-c929-4943-bd14-d4e193992052\") " pod="openstack-operators/openstack-operator-index-6ntrt" Jan 24 00:17:20 crc kubenswrapper[4676]: I0124 00:17:20.459212 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4vfp\" (UniqueName: \"kubernetes.io/projected/5425e3ea-c929-4943-bd14-d4e193992052-kube-api-access-r4vfp\") pod \"openstack-operator-index-6ntrt\" (UID: 
\"5425e3ea-c929-4943-bd14-d4e193992052\") " pod="openstack-operators/openstack-operator-index-6ntrt" Jan 24 00:17:20 crc kubenswrapper[4676]: I0124 00:17:20.533641 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6ntrt" Jan 24 00:17:20 crc kubenswrapper[4676]: I0124 00:17:20.964987 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6ntrt"] Jan 24 00:17:21 crc kubenswrapper[4676]: I0124 00:17:21.198442 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-szpqt" Jan 24 00:17:21 crc kubenswrapper[4676]: I0124 00:17:21.245917 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-szpqt" Jan 24 00:17:21 crc kubenswrapper[4676]: I0124 00:17:21.893708 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6ntrt" event={"ID":"5425e3ea-c929-4943-bd14-d4e193992052","Type":"ContainerStarted","Data":"ee09d2706d4eadeb85bdfce3c04bc857b26be38730265e45634501b3005889aa"} Jan 24 00:17:23 crc kubenswrapper[4676]: I0124 00:17:23.908625 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6ntrt" event={"ID":"5425e3ea-c929-4943-bd14-d4e193992052","Type":"ContainerStarted","Data":"7921f77247ccf3e14c1bcf7202d66aacc6d4736b70b42e45031192bfc3dc839d"} Jan 24 00:17:23 crc kubenswrapper[4676]: I0124 00:17:23.928102 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6ntrt" podStartSLOduration=1.600803535 podStartE2EDuration="3.928078419s" podCreationTimestamp="2026-01-24 00:17:20 +0000 UTC" firstStartedPulling="2026-01-24 00:17:20.9749779 +0000 UTC m=+825.004948911" lastFinishedPulling="2026-01-24 00:17:23.302252794 +0000 UTC m=+827.332223795" observedRunningTime="2026-01-24 00:17:23.925763398 +0000 
UTC m=+827.955734399" watchObservedRunningTime="2026-01-24 00:17:23.928078419 +0000 UTC m=+827.958049460" Jan 24 00:17:24 crc kubenswrapper[4676]: I0124 00:17:24.388562 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6ntrt"] Jan 24 00:17:24 crc kubenswrapper[4676]: I0124 00:17:24.987118 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kw9l8"] Jan 24 00:17:24 crc kubenswrapper[4676]: I0124 00:17:24.988239 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kw9l8" Jan 24 00:17:24 crc kubenswrapper[4676]: I0124 00:17:24.997907 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kw9l8"] Jan 24 00:17:25 crc kubenswrapper[4676]: I0124 00:17:25.004974 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qpx6\" (UniqueName: \"kubernetes.io/projected/25146f99-405a-4473-bf27-69a7195a3338-kube-api-access-6qpx6\") pod \"openstack-operator-index-kw9l8\" (UID: \"25146f99-405a-4473-bf27-69a7195a3338\") " pod="openstack-operators/openstack-operator-index-kw9l8" Jan 24 00:17:25 crc kubenswrapper[4676]: I0124 00:17:25.106462 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qpx6\" (UniqueName: \"kubernetes.io/projected/25146f99-405a-4473-bf27-69a7195a3338-kube-api-access-6qpx6\") pod \"openstack-operator-index-kw9l8\" (UID: \"25146f99-405a-4473-bf27-69a7195a3338\") " pod="openstack-operators/openstack-operator-index-kw9l8" Jan 24 00:17:25 crc kubenswrapper[4676]: I0124 00:17:25.133085 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qpx6\" (UniqueName: \"kubernetes.io/projected/25146f99-405a-4473-bf27-69a7195a3338-kube-api-access-6qpx6\") pod \"openstack-operator-index-kw9l8\" (UID: 
\"25146f99-405a-4473-bf27-69a7195a3338\") " pod="openstack-operators/openstack-operator-index-kw9l8" Jan 24 00:17:25 crc kubenswrapper[4676]: I0124 00:17:25.317249 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kw9l8" Jan 24 00:17:25 crc kubenswrapper[4676]: I0124 00:17:25.551424 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kw9l8"] Jan 24 00:17:25 crc kubenswrapper[4676]: W0124 00:17:25.557032 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25146f99_405a_4473_bf27_69a7195a3338.slice/crio-17da9ac2c23c5c6cad3d8f4d82d1f4430a5a586f1e9a35c05032d087359c1bd6 WatchSource:0}: Error finding container 17da9ac2c23c5c6cad3d8f4d82d1f4430a5a586f1e9a35c05032d087359c1bd6: Status 404 returned error can't find the container with id 17da9ac2c23c5c6cad3d8f4d82d1f4430a5a586f1e9a35c05032d087359c1bd6 Jan 24 00:17:25 crc kubenswrapper[4676]: I0124 00:17:25.606899 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s7b59" Jan 24 00:17:25 crc kubenswrapper[4676]: I0124 00:17:25.725761 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-jqk4f" Jan 24 00:17:25 crc kubenswrapper[4676]: I0124 00:17:25.921294 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kw9l8" event={"ID":"25146f99-405a-4473-bf27-69a7195a3338","Type":"ContainerStarted","Data":"17da9ac2c23c5c6cad3d8f4d82d1f4430a5a586f1e9a35c05032d087359c1bd6"} Jan 24 00:17:25 crc kubenswrapper[4676]: I0124 00:17:25.921493 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-6ntrt" podUID="5425e3ea-c929-4943-bd14-d4e193992052" containerName="registry-server" 
containerID="cri-o://7921f77247ccf3e14c1bcf7202d66aacc6d4736b70b42e45031192bfc3dc839d" gracePeriod=2 Jan 24 00:17:26 crc kubenswrapper[4676]: I0124 00:17:26.820581 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6ntrt" Jan 24 00:17:26 crc kubenswrapper[4676]: I0124 00:17:26.852849 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4vfp\" (UniqueName: \"kubernetes.io/projected/5425e3ea-c929-4943-bd14-d4e193992052-kube-api-access-r4vfp\") pod \"5425e3ea-c929-4943-bd14-d4e193992052\" (UID: \"5425e3ea-c929-4943-bd14-d4e193992052\") " Jan 24 00:17:26 crc kubenswrapper[4676]: I0124 00:17:26.861170 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5425e3ea-c929-4943-bd14-d4e193992052-kube-api-access-r4vfp" (OuterVolumeSpecName: "kube-api-access-r4vfp") pod "5425e3ea-c929-4943-bd14-d4e193992052" (UID: "5425e3ea-c929-4943-bd14-d4e193992052"). InnerVolumeSpecName "kube-api-access-r4vfp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:17:26 crc kubenswrapper[4676]: I0124 00:17:26.930045 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kw9l8" event={"ID":"25146f99-405a-4473-bf27-69a7195a3338","Type":"ContainerStarted","Data":"ae7af5773b0c0c3022a37af8482a6211017babb29f2d4c0ba014e258b1e9ee90"} Jan 24 00:17:26 crc kubenswrapper[4676]: I0124 00:17:26.935305 4676 generic.go:334] "Generic (PLEG): container finished" podID="5425e3ea-c929-4943-bd14-d4e193992052" containerID="7921f77247ccf3e14c1bcf7202d66aacc6d4736b70b42e45031192bfc3dc839d" exitCode=0 Jan 24 00:17:26 crc kubenswrapper[4676]: I0124 00:17:26.935354 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6ntrt" event={"ID":"5425e3ea-c929-4943-bd14-d4e193992052","Type":"ContainerDied","Data":"7921f77247ccf3e14c1bcf7202d66aacc6d4736b70b42e45031192bfc3dc839d"} Jan 24 00:17:26 crc kubenswrapper[4676]: I0124 00:17:26.935406 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6ntrt" event={"ID":"5425e3ea-c929-4943-bd14-d4e193992052","Type":"ContainerDied","Data":"ee09d2706d4eadeb85bdfce3c04bc857b26be38730265e45634501b3005889aa"} Jan 24 00:17:26 crc kubenswrapper[4676]: I0124 00:17:26.935431 4676 scope.go:117] "RemoveContainer" containerID="7921f77247ccf3e14c1bcf7202d66aacc6d4736b70b42e45031192bfc3dc839d" Jan 24 00:17:26 crc kubenswrapper[4676]: I0124 00:17:26.935439 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6ntrt" Jan 24 00:17:26 crc kubenswrapper[4676]: I0124 00:17:26.958495 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4vfp\" (UniqueName: \"kubernetes.io/projected/5425e3ea-c929-4943-bd14-d4e193992052-kube-api-access-r4vfp\") on node \"crc\" DevicePath \"\"" Jan 24 00:17:26 crc kubenswrapper[4676]: I0124 00:17:26.960509 4676 scope.go:117] "RemoveContainer" containerID="7921f77247ccf3e14c1bcf7202d66aacc6d4736b70b42e45031192bfc3dc839d" Jan 24 00:17:26 crc kubenswrapper[4676]: E0124 00:17:26.961285 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7921f77247ccf3e14c1bcf7202d66aacc6d4736b70b42e45031192bfc3dc839d\": container with ID starting with 7921f77247ccf3e14c1bcf7202d66aacc6d4736b70b42e45031192bfc3dc839d not found: ID does not exist" containerID="7921f77247ccf3e14c1bcf7202d66aacc6d4736b70b42e45031192bfc3dc839d" Jan 24 00:17:26 crc kubenswrapper[4676]: I0124 00:17:26.961324 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7921f77247ccf3e14c1bcf7202d66aacc6d4736b70b42e45031192bfc3dc839d"} err="failed to get container status \"7921f77247ccf3e14c1bcf7202d66aacc6d4736b70b42e45031192bfc3dc839d\": rpc error: code = NotFound desc = could not find container \"7921f77247ccf3e14c1bcf7202d66aacc6d4736b70b42e45031192bfc3dc839d\": container with ID starting with 7921f77247ccf3e14c1bcf7202d66aacc6d4736b70b42e45031192bfc3dc839d not found: ID does not exist" Jan 24 00:17:26 crc kubenswrapper[4676]: I0124 00:17:26.966926 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kw9l8" podStartSLOduration=2.887173703 podStartE2EDuration="2.966905371s" podCreationTimestamp="2026-01-24 00:17:24 +0000 UTC" firstStartedPulling="2026-01-24 00:17:25.56125536 +0000 UTC m=+829.591226361" 
lastFinishedPulling="2026-01-24 00:17:25.640987028 +0000 UTC m=+829.670958029" observedRunningTime="2026-01-24 00:17:26.960839375 +0000 UTC m=+830.990810386" watchObservedRunningTime="2026-01-24 00:17:26.966905371 +0000 UTC m=+830.996876392" Jan 24 00:17:26 crc kubenswrapper[4676]: I0124 00:17:26.984206 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6ntrt"] Jan 24 00:17:26 crc kubenswrapper[4676]: I0124 00:17:26.987906 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-6ntrt"] Jan 24 00:17:28 crc kubenswrapper[4676]: I0124 00:17:28.269298 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5425e3ea-c929-4943-bd14-d4e193992052" path="/var/lib/kubelet/pods/5425e3ea-c929-4943-bd14-d4e193992052/volumes" Jan 24 00:17:35 crc kubenswrapper[4676]: I0124 00:17:35.318561 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-kw9l8" Jan 24 00:17:35 crc kubenswrapper[4676]: I0124 00:17:35.319639 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-kw9l8" Jan 24 00:17:35 crc kubenswrapper[4676]: I0124 00:17:35.359546 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-kw9l8" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.057041 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-kw9l8" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.202116 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-szpqt" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.684408 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789"] Jan 
24 00:17:36 crc kubenswrapper[4676]: E0124 00:17:36.684680 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5425e3ea-c929-4943-bd14-d4e193992052" containerName="registry-server" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.684700 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="5425e3ea-c929-4943-bd14-d4e193992052" containerName="registry-server" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.684865 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="5425e3ea-c929-4943-bd14-d4e193992052" containerName="registry-server" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.685940 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.690144 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jhh24" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.710195 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789"] Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.803663 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d669669-26ba-4775-b7cc-e97cc7dbe326-util\") pod \"ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789\" (UID: \"9d669669-26ba-4775-b7cc-e97cc7dbe326\") " pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.803721 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx2vn\" (UniqueName: \"kubernetes.io/projected/9d669669-26ba-4775-b7cc-e97cc7dbe326-kube-api-access-dx2vn\") pod 
\"ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789\" (UID: \"9d669669-26ba-4775-b7cc-e97cc7dbe326\") " pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.803742 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d669669-26ba-4775-b7cc-e97cc7dbe326-bundle\") pod \"ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789\" (UID: \"9d669669-26ba-4775-b7cc-e97cc7dbe326\") " pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.905147 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d669669-26ba-4775-b7cc-e97cc7dbe326-util\") pod \"ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789\" (UID: \"9d669669-26ba-4775-b7cc-e97cc7dbe326\") " pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.905246 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx2vn\" (UniqueName: \"kubernetes.io/projected/9d669669-26ba-4775-b7cc-e97cc7dbe326-kube-api-access-dx2vn\") pod \"ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789\" (UID: \"9d669669-26ba-4775-b7cc-e97cc7dbe326\") " pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.905312 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d669669-26ba-4775-b7cc-e97cc7dbe326-bundle\") pod \"ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789\" (UID: \"9d669669-26ba-4775-b7cc-e97cc7dbe326\") " 
pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.905765 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d669669-26ba-4775-b7cc-e97cc7dbe326-util\") pod \"ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789\" (UID: \"9d669669-26ba-4775-b7cc-e97cc7dbe326\") " pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.905782 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d669669-26ba-4775-b7cc-e97cc7dbe326-bundle\") pod \"ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789\" (UID: \"9d669669-26ba-4775-b7cc-e97cc7dbe326\") " pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" Jan 24 00:17:36 crc kubenswrapper[4676]: I0124 00:17:36.925230 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx2vn\" (UniqueName: \"kubernetes.io/projected/9d669669-26ba-4775-b7cc-e97cc7dbe326-kube-api-access-dx2vn\") pod \"ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789\" (UID: \"9d669669-26ba-4775-b7cc-e97cc7dbe326\") " pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" Jan 24 00:17:37 crc kubenswrapper[4676]: I0124 00:17:37.006054 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" Jan 24 00:17:37 crc kubenswrapper[4676]: I0124 00:17:37.444215 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789"] Jan 24 00:17:38 crc kubenswrapper[4676]: I0124 00:17:38.038925 4676 generic.go:334] "Generic (PLEG): container finished" podID="9d669669-26ba-4775-b7cc-e97cc7dbe326" containerID="febce8b3323e91f83e00c3dd649ad413f2af7374de871b23981687ad243e2b66" exitCode=0 Jan 24 00:17:38 crc kubenswrapper[4676]: I0124 00:17:38.039111 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" event={"ID":"9d669669-26ba-4775-b7cc-e97cc7dbe326","Type":"ContainerDied","Data":"febce8b3323e91f83e00c3dd649ad413f2af7374de871b23981687ad243e2b66"} Jan 24 00:17:38 crc kubenswrapper[4676]: I0124 00:17:38.039362 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" event={"ID":"9d669669-26ba-4775-b7cc-e97cc7dbe326","Type":"ContainerStarted","Data":"6e080ebc2338efd89d347cecd9a36fade586fddfd61171c9fe2f90594f0f5697"} Jan 24 00:17:39 crc kubenswrapper[4676]: I0124 00:17:39.052081 4676 generic.go:334] "Generic (PLEG): container finished" podID="9d669669-26ba-4775-b7cc-e97cc7dbe326" containerID="33cd5040f326fb3e4a05af3f6c86f27b17dad7629e9dd1835e1deaaa7b71a1ab" exitCode=0 Jan 24 00:17:39 crc kubenswrapper[4676]: I0124 00:17:39.052172 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" event={"ID":"9d669669-26ba-4775-b7cc-e97cc7dbe326","Type":"ContainerDied","Data":"33cd5040f326fb3e4a05af3f6c86f27b17dad7629e9dd1835e1deaaa7b71a1ab"} Jan 24 00:17:40 crc kubenswrapper[4676]: I0124 00:17:40.062915 4676 generic.go:334] 
"Generic (PLEG): container finished" podID="9d669669-26ba-4775-b7cc-e97cc7dbe326" containerID="0096dd21fa12fac790c05fdeef3584e1e18ecb50242cb75bfb3734240a194e22" exitCode=0 Jan 24 00:17:40 crc kubenswrapper[4676]: I0124 00:17:40.062967 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" event={"ID":"9d669669-26ba-4775-b7cc-e97cc7dbe326","Type":"ContainerDied","Data":"0096dd21fa12fac790c05fdeef3584e1e18ecb50242cb75bfb3734240a194e22"} Jan 24 00:17:41 crc kubenswrapper[4676]: I0124 00:17:41.351459 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" Jan 24 00:17:41 crc kubenswrapper[4676]: I0124 00:17:41.388820 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx2vn\" (UniqueName: \"kubernetes.io/projected/9d669669-26ba-4775-b7cc-e97cc7dbe326-kube-api-access-dx2vn\") pod \"9d669669-26ba-4775-b7cc-e97cc7dbe326\" (UID: \"9d669669-26ba-4775-b7cc-e97cc7dbe326\") " Jan 24 00:17:41 crc kubenswrapper[4676]: I0124 00:17:41.388912 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d669669-26ba-4775-b7cc-e97cc7dbe326-util\") pod \"9d669669-26ba-4775-b7cc-e97cc7dbe326\" (UID: \"9d669669-26ba-4775-b7cc-e97cc7dbe326\") " Jan 24 00:17:41 crc kubenswrapper[4676]: I0124 00:17:41.388962 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d669669-26ba-4775-b7cc-e97cc7dbe326-bundle\") pod \"9d669669-26ba-4775-b7cc-e97cc7dbe326\" (UID: \"9d669669-26ba-4775-b7cc-e97cc7dbe326\") " Jan 24 00:17:41 crc kubenswrapper[4676]: I0124 00:17:41.389950 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9d669669-26ba-4775-b7cc-e97cc7dbe326-bundle" (OuterVolumeSpecName: "bundle") pod "9d669669-26ba-4775-b7cc-e97cc7dbe326" (UID: "9d669669-26ba-4775-b7cc-e97cc7dbe326"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:17:41 crc kubenswrapper[4676]: I0124 00:17:41.395547 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d669669-26ba-4775-b7cc-e97cc7dbe326-kube-api-access-dx2vn" (OuterVolumeSpecName: "kube-api-access-dx2vn") pod "9d669669-26ba-4775-b7cc-e97cc7dbe326" (UID: "9d669669-26ba-4775-b7cc-e97cc7dbe326"). InnerVolumeSpecName "kube-api-access-dx2vn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:17:41 crc kubenswrapper[4676]: I0124 00:17:41.406740 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d669669-26ba-4775-b7cc-e97cc7dbe326-util" (OuterVolumeSpecName: "util") pod "9d669669-26ba-4775-b7cc-e97cc7dbe326" (UID: "9d669669-26ba-4775-b7cc-e97cc7dbe326"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:17:41 crc kubenswrapper[4676]: I0124 00:17:41.491305 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx2vn\" (UniqueName: \"kubernetes.io/projected/9d669669-26ba-4775-b7cc-e97cc7dbe326-kube-api-access-dx2vn\") on node \"crc\" DevicePath \"\"" Jan 24 00:17:41 crc kubenswrapper[4676]: I0124 00:17:41.491342 4676 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d669669-26ba-4775-b7cc-e97cc7dbe326-util\") on node \"crc\" DevicePath \"\"" Jan 24 00:17:41 crc kubenswrapper[4676]: I0124 00:17:41.491354 4676 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d669669-26ba-4775-b7cc-e97cc7dbe326-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:17:42 crc kubenswrapper[4676]: I0124 00:17:42.080313 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" event={"ID":"9d669669-26ba-4775-b7cc-e97cc7dbe326","Type":"ContainerDied","Data":"6e080ebc2338efd89d347cecd9a36fade586fddfd61171c9fe2f90594f0f5697"} Jan 24 00:17:42 crc kubenswrapper[4676]: I0124 00:17:42.080677 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e080ebc2338efd89d347cecd9a36fade586fddfd61171c9fe2f90594f0f5697" Jan 24 00:17:42 crc kubenswrapper[4676]: I0124 00:17:42.080456 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789" Jan 24 00:17:44 crc kubenswrapper[4676]: I0124 00:17:44.760098 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-69647cdbc5-96fbr"] Jan 24 00:17:44 crc kubenswrapper[4676]: E0124 00:17:44.760822 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d669669-26ba-4775-b7cc-e97cc7dbe326" containerName="pull" Jan 24 00:17:44 crc kubenswrapper[4676]: I0124 00:17:44.760841 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d669669-26ba-4775-b7cc-e97cc7dbe326" containerName="pull" Jan 24 00:17:44 crc kubenswrapper[4676]: E0124 00:17:44.760859 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d669669-26ba-4775-b7cc-e97cc7dbe326" containerName="util" Jan 24 00:17:44 crc kubenswrapper[4676]: I0124 00:17:44.760869 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d669669-26ba-4775-b7cc-e97cc7dbe326" containerName="util" Jan 24 00:17:44 crc kubenswrapper[4676]: E0124 00:17:44.760895 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d669669-26ba-4775-b7cc-e97cc7dbe326" containerName="extract" Jan 24 00:17:44 crc kubenswrapper[4676]: I0124 00:17:44.760905 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d669669-26ba-4775-b7cc-e97cc7dbe326" containerName="extract" Jan 24 00:17:44 crc kubenswrapper[4676]: I0124 00:17:44.761059 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d669669-26ba-4775-b7cc-e97cc7dbe326" containerName="extract" Jan 24 00:17:44 crc kubenswrapper[4676]: I0124 00:17:44.761633 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-69647cdbc5-96fbr" Jan 24 00:17:44 crc kubenswrapper[4676]: I0124 00:17:44.764218 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-w8lw6" Jan 24 00:17:44 crc kubenswrapper[4676]: I0124 00:17:44.837085 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjp56\" (UniqueName: \"kubernetes.io/projected/07dc00eb-bfcb-4d0d-bd6a-9e4b52e3e7f6-kube-api-access-fjp56\") pod \"openstack-operator-controller-init-69647cdbc5-96fbr\" (UID: \"07dc00eb-bfcb-4d0d-bd6a-9e4b52e3e7f6\") " pod="openstack-operators/openstack-operator-controller-init-69647cdbc5-96fbr" Jan 24 00:17:44 crc kubenswrapper[4676]: I0124 00:17:44.861223 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-69647cdbc5-96fbr"] Jan 24 00:17:44 crc kubenswrapper[4676]: I0124 00:17:44.938882 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjp56\" (UniqueName: \"kubernetes.io/projected/07dc00eb-bfcb-4d0d-bd6a-9e4b52e3e7f6-kube-api-access-fjp56\") pod \"openstack-operator-controller-init-69647cdbc5-96fbr\" (UID: \"07dc00eb-bfcb-4d0d-bd6a-9e4b52e3e7f6\") " pod="openstack-operators/openstack-operator-controller-init-69647cdbc5-96fbr" Jan 24 00:17:44 crc kubenswrapper[4676]: I0124 00:17:44.957208 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjp56\" (UniqueName: \"kubernetes.io/projected/07dc00eb-bfcb-4d0d-bd6a-9e4b52e3e7f6-kube-api-access-fjp56\") pod \"openstack-operator-controller-init-69647cdbc5-96fbr\" (UID: \"07dc00eb-bfcb-4d0d-bd6a-9e4b52e3e7f6\") " pod="openstack-operators/openstack-operator-controller-init-69647cdbc5-96fbr" Jan 24 00:17:45 crc kubenswrapper[4676]: I0124 00:17:45.076682 4676 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-69647cdbc5-96fbr" Jan 24 00:17:45 crc kubenswrapper[4676]: I0124 00:17:45.555195 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-69647cdbc5-96fbr"] Jan 24 00:17:46 crc kubenswrapper[4676]: I0124 00:17:46.110492 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-69647cdbc5-96fbr" event={"ID":"07dc00eb-bfcb-4d0d-bd6a-9e4b52e3e7f6","Type":"ContainerStarted","Data":"d5895aa75d2cdcc2aa52ae1b3ab7d21be12e0afd307d310c4434c4bef4953f6c"} Jan 24 00:17:50 crc kubenswrapper[4676]: I0124 00:17:50.146367 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-69647cdbc5-96fbr" event={"ID":"07dc00eb-bfcb-4d0d-bd6a-9e4b52e3e7f6","Type":"ContainerStarted","Data":"7cf047d5cb51ded3ea0d506ac23e1396d51d95d1b31f3ab0ba6365be60ad4afd"} Jan 24 00:17:50 crc kubenswrapper[4676]: I0124 00:17:50.146919 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-69647cdbc5-96fbr" Jan 24 00:17:50 crc kubenswrapper[4676]: I0124 00:17:50.178351 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-69647cdbc5-96fbr" podStartSLOduration=1.752634459 podStartE2EDuration="6.178334955s" podCreationTimestamp="2026-01-24 00:17:44 +0000 UTC" firstStartedPulling="2026-01-24 00:17:45.562475097 +0000 UTC m=+849.592446108" lastFinishedPulling="2026-01-24 00:17:49.988175603 +0000 UTC m=+854.018146604" observedRunningTime="2026-01-24 00:17:50.173692392 +0000 UTC m=+854.203663403" watchObservedRunningTime="2026-01-24 00:17:50.178334955 +0000 UTC m=+854.208305966" Jan 24 00:17:55 crc kubenswrapper[4676]: I0124 00:17:55.080135 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-69647cdbc5-96fbr" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.234554 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-fgplq"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.236051 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fgplq" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.238021 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-f2s6k" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.246619 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-fgplq"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.266098 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-c8c6m"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.266728 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c8c6m" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.276592 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-svdrx" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.290049 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-c8c6m"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.303824 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-btxnv"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.320151 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-btxnv" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.337389 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-g8q5n" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.343046 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-js6db"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.343788 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-js6db" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.345900 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-ncsvx" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.346803 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsns8\" (UniqueName: \"kubernetes.io/projected/02123851-7d2f-477b-9c60-5a9922a0bc97-kube-api-access-bsns8\") pod \"barbican-operator-controller-manager-7f86f8796f-fgplq\" (UID: \"02123851-7d2f-477b-9c60-5a9922a0bc97\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fgplq" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.346921 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92kpk\" (UniqueName: \"kubernetes.io/projected/29e4b64d-19bd-419b-9e29-7a41e6f12ae0-kube-api-access-92kpk\") pod \"glance-operator-controller-manager-78fdd796fd-btxnv\" (UID: \"29e4b64d-19bd-419b-9e29-7a41e6f12ae0\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-btxnv" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.347005 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7g5p\" (UniqueName: \"kubernetes.io/projected/0cd05b9f-6699-46e3-ae36-9f21352e6c8e-kube-api-access-q7g5p\") pod \"cinder-operator-controller-manager-69cf5d4557-c8c6m\" (UID: \"0cd05b9f-6699-46e3-ae36-9f21352e6c8e\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c8c6m" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.373219 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-js6db"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 
00:18:16.384948 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-p2nr8"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.386111 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-p2nr8" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.391613 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-btxnv"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.409272 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-p2nr8"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.415046 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-r5w45" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.426890 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.427672 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.430847 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rbzsj"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.431481 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rbzsj" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.448603 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f9hs\" (UniqueName: \"kubernetes.io/projected/6b8541f9-a37a-41d6-8006-3d0335c3abb5-kube-api-access-6f9hs\") pod \"horizon-operator-controller-manager-77d5c5b54f-rbzsj\" (UID: \"6b8541f9-a37a-41d6-8006-3d0335c3abb5\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rbzsj" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.448649 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw5zz\" (UniqueName: \"kubernetes.io/projected/921e121c-5261-4fe7-8171-6b634babedf4-kube-api-access-hw5zz\") pod \"infra-operator-controller-manager-58749ffdfb-jxx26\" (UID: \"921e121c-5261-4fe7-8171-6b634babedf4\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.448694 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsns8\" (UniqueName: \"kubernetes.io/projected/02123851-7d2f-477b-9c60-5a9922a0bc97-kube-api-access-bsns8\") pod \"barbican-operator-controller-manager-7f86f8796f-fgplq\" (UID: \"02123851-7d2f-477b-9c60-5a9922a0bc97\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fgplq" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.448721 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jxx26\" (UID: \"921e121c-5261-4fe7-8171-6b634babedf4\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" Jan 24 00:18:16 crc 
kubenswrapper[4676]: I0124 00:18:16.448743 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsl2n\" (UniqueName: \"kubernetes.io/projected/5e9cf1cb-c413-45ad-8a51-bf35407fcdfe-kube-api-access-qsl2n\") pod \"heat-operator-controller-manager-594c8c9d5d-p2nr8\" (UID: \"5e9cf1cb-c413-45ad-8a51-bf35407fcdfe\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-p2nr8" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.448771 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2lp4\" (UniqueName: \"kubernetes.io/projected/a9f1e2a4-c9fa-4136-aa76-059dc2ed9c85-kube-api-access-c2lp4\") pod \"designate-operator-controller-manager-b45d7bf98-js6db\" (UID: \"a9f1e2a4-c9fa-4136-aa76-059dc2ed9c85\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-js6db" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.448795 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92kpk\" (UniqueName: \"kubernetes.io/projected/29e4b64d-19bd-419b-9e29-7a41e6f12ae0-kube-api-access-92kpk\") pod \"glance-operator-controller-manager-78fdd796fd-btxnv\" (UID: \"29e4b64d-19bd-419b-9e29-7a41e6f12ae0\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-btxnv" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.448822 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7g5p\" (UniqueName: \"kubernetes.io/projected/0cd05b9f-6699-46e3-ae36-9f21352e6c8e-kube-api-access-q7g5p\") pod \"cinder-operator-controller-manager-69cf5d4557-c8c6m\" (UID: \"0cd05b9f-6699-46e3-ae36-9f21352e6c8e\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c8c6m" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.456268 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.470071 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.473890 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-v25h4"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.474414 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kcp2q" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.474658 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v25h4" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.474871 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-j96cn" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.480718 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-69mq8" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.493424 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rbzsj"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.511258 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-v25h4"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.515060 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7g5p\" (UniqueName: \"kubernetes.io/projected/0cd05b9f-6699-46e3-ae36-9f21352e6c8e-kube-api-access-q7g5p\") pod 
\"cinder-operator-controller-manager-69cf5d4557-c8c6m\" (UID: \"0cd05b9f-6699-46e3-ae36-9f21352e6c8e\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c8c6m" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.515650 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsns8\" (UniqueName: \"kubernetes.io/projected/02123851-7d2f-477b-9c60-5a9922a0bc97-kube-api-access-bsns8\") pod \"barbican-operator-controller-manager-7f86f8796f-fgplq\" (UID: \"02123851-7d2f-477b-9c60-5a9922a0bc97\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fgplq" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.524781 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-2gzqg"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.526639 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2gzqg" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.544967 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-g9dr2" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.551444 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-2gzqg"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.552152 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2lp4\" (UniqueName: \"kubernetes.io/projected/a9f1e2a4-c9fa-4136-aa76-059dc2ed9c85-kube-api-access-c2lp4\") pod \"designate-operator-controller-manager-b45d7bf98-js6db\" (UID: \"a9f1e2a4-c9fa-4136-aa76-059dc2ed9c85\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-js6db" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.552223 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f9hs\" (UniqueName: \"kubernetes.io/projected/6b8541f9-a37a-41d6-8006-3d0335c3abb5-kube-api-access-6f9hs\") pod \"horizon-operator-controller-manager-77d5c5b54f-rbzsj\" (UID: \"6b8541f9-a37a-41d6-8006-3d0335c3abb5\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rbzsj" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.552247 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fqcx\" (UniqueName: \"kubernetes.io/projected/555ebb8f-1bc3-4b8d-9f37-cad92b48477c-kube-api-access-5fqcx\") pod \"ironic-operator-controller-manager-598f7747c9-v25h4\" (UID: \"555ebb8f-1bc3-4b8d-9f37-cad92b48477c\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v25h4" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.552285 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw5zz\" (UniqueName: \"kubernetes.io/projected/921e121c-5261-4fe7-8171-6b634babedf4-kube-api-access-hw5zz\") pod \"infra-operator-controller-manager-58749ffdfb-jxx26\" (UID: \"921e121c-5261-4fe7-8171-6b634babedf4\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.552314 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np4zv\" (UniqueName: \"kubernetes.io/projected/9a3f9a14-1138-425d-8a56-454b282d7d9f-kube-api-access-np4zv\") pod \"keystone-operator-controller-manager-b8b6d4659-2gzqg\" (UID: \"9a3f9a14-1138-425d-8a56-454b282d7d9f\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2gzqg" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.552360 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jxx26\" (UID: \"921e121c-5261-4fe7-8171-6b634babedf4\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.552410 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsl2n\" (UniqueName: \"kubernetes.io/projected/5e9cf1cb-c413-45ad-8a51-bf35407fcdfe-kube-api-access-qsl2n\") pod \"heat-operator-controller-manager-594c8c9d5d-p2nr8\" (UID: \"5e9cf1cb-c413-45ad-8a51-bf35407fcdfe\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-p2nr8" Jan 24 00:18:16 crc kubenswrapper[4676]: E0124 00:18:16.552593 4676 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 00:18:16 crc kubenswrapper[4676]: E0124 00:18:16.552667 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert podName:921e121c-5261-4fe7-8171-6b634babedf4 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:17.052647037 +0000 UTC m=+881.082618038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert") pod "infra-operator-controller-manager-58749ffdfb-jxx26" (UID: "921e121c-5261-4fe7-8171-6b634babedf4") : secret "infra-operator-webhook-server-cert" not found Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.565714 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fgplq" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.583102 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92kpk\" (UniqueName: \"kubernetes.io/projected/29e4b64d-19bd-419b-9e29-7a41e6f12ae0-kube-api-access-92kpk\") pod \"glance-operator-controller-manager-78fdd796fd-btxnv\" (UID: \"29e4b64d-19bd-419b-9e29-7a41e6f12ae0\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-btxnv" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.585980 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsl2n\" (UniqueName: \"kubernetes.io/projected/5e9cf1cb-c413-45ad-8a51-bf35407fcdfe-kube-api-access-qsl2n\") pod \"heat-operator-controller-manager-594c8c9d5d-p2nr8\" (UID: \"5e9cf1cb-c413-45ad-8a51-bf35407fcdfe\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-p2nr8" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.589872 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c8c6m" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.590566 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-g4d8z"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.598727 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f9hs\" (UniqueName: \"kubernetes.io/projected/6b8541f9-a37a-41d6-8006-3d0335c3abb5-kube-api-access-6f9hs\") pod \"horizon-operator-controller-manager-77d5c5b54f-rbzsj\" (UID: \"6b8541f9-a37a-41d6-8006-3d0335c3abb5\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rbzsj" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.601544 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g4d8z" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.606504 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-g4d8z"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.608961 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw5zz\" (UniqueName: \"kubernetes.io/projected/921e121c-5261-4fe7-8171-6b634babedf4-kube-api-access-hw5zz\") pod \"infra-operator-controller-manager-58749ffdfb-jxx26\" (UID: \"921e121c-5261-4fe7-8171-6b634babedf4\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.619144 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2lp4\" (UniqueName: \"kubernetes.io/projected/a9f1e2a4-c9fa-4136-aa76-059dc2ed9c85-kube-api-access-c2lp4\") pod \"designate-operator-controller-manager-b45d7bf98-js6db\" (UID: \"a9f1e2a4-c9fa-4136-aa76-059dc2ed9c85\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-js6db" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.619901 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-r2mlc" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.653101 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fqcx\" (UniqueName: \"kubernetes.io/projected/555ebb8f-1bc3-4b8d-9f37-cad92b48477c-kube-api-access-5fqcx\") pod \"ironic-operator-controller-manager-598f7747c9-v25h4\" (UID: \"555ebb8f-1bc3-4b8d-9f37-cad92b48477c\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v25h4" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.653165 4676 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-np4zv\" (UniqueName: \"kubernetes.io/projected/9a3f9a14-1138-425d-8a56-454b282d7d9f-kube-api-access-np4zv\") pod \"keystone-operator-controller-manager-b8b6d4659-2gzqg\" (UID: \"9a3f9a14-1138-425d-8a56-454b282d7d9f\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2gzqg" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.653242 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8zz7\" (UniqueName: \"kubernetes.io/projected/dfc79179-d245-4360-be6e-8b43441e23ed-kube-api-access-l8zz7\") pod \"manila-operator-controller-manager-78c6999f6f-g4d8z\" (UID: \"dfc79179-d245-4360-be6e-8b43441e23ed\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g4d8z" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.661630 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-btxnv" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.667680 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-js6db" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.678160 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.679197 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.696887 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-78hg7" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.700792 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-p2nr8" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.706140 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.706636 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np4zv\" (UniqueName: \"kubernetes.io/projected/9a3f9a14-1138-425d-8a56-454b282d7d9f-kube-api-access-np4zv\") pod \"keystone-operator-controller-manager-b8b6d4659-2gzqg\" (UID: \"9a3f9a14-1138-425d-8a56-454b282d7d9f\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2gzqg" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.707226 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.708908 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.718058 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-qc7mt" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.718297 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fqcx\" (UniqueName: \"kubernetes.io/projected/555ebb8f-1bc3-4b8d-9f37-cad92b48477c-kube-api-access-5fqcx\") pod \"ironic-operator-controller-manager-598f7747c9-v25h4\" (UID: \"555ebb8f-1bc3-4b8d-9f37-cad92b48477c\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v25h4" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.725843 4676 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.726634 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.734783 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gqgzt" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.735517 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.753885 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.765063 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvgz4\" (UniqueName: \"kubernetes.io/projected/4ce661f6-26e2-4da2-a759-e493a60587b2-kube-api-access-xvgz4\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n\" (UID: \"4ce661f6-26e2-4da2-a759-e493a60587b2\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.765118 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8zz7\" (UniqueName: \"kubernetes.io/projected/dfc79179-d245-4360-be6e-8b43441e23ed-kube-api-access-l8zz7\") pod \"manila-operator-controller-manager-78c6999f6f-g4d8z\" (UID: \"dfc79179-d245-4360-be6e-8b43441e23ed\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g4d8z" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.765198 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-l8pmb\" (UniqueName: \"kubernetes.io/projected/df9ab5f0-f577-4303-8045-f960c67a6936-kube-api-access-l8pmb\") pod \"nova-operator-controller-manager-6b8bc8d87d-4xp45\" (UID: \"df9ab5f0-f577-4303-8045-f960c67a6936\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.765263 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22p7t\" (UniqueName: \"kubernetes.io/projected/dd6346d8-9cf1-4364-b480-f4c2d872472f-kube-api-access-22p7t\") pod \"neutron-operator-controller-manager-78d58447c5-wqbcz\" (UID: \"dd6346d8-9cf1-4364-b480-f4c2d872472f\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.767546 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-k8lw7"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.771647 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rbzsj" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.774360 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-k8lw7" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.775943 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-b62h5" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.792904 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-k8lw7"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.844560 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8zz7\" (UniqueName: \"kubernetes.io/projected/dfc79179-d245-4360-be6e-8b43441e23ed-kube-api-access-l8zz7\") pod \"manila-operator-controller-manager-78c6999f6f-g4d8z\" (UID: \"dfc79179-d245-4360-be6e-8b43441e23ed\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g4d8z" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.874059 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22p7t\" (UniqueName: \"kubernetes.io/projected/dd6346d8-9cf1-4364-b480-f4c2d872472f-kube-api-access-22p7t\") pod \"neutron-operator-controller-manager-78d58447c5-wqbcz\" (UID: \"dd6346d8-9cf1-4364-b480-f4c2d872472f\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.874102 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvgz4\" (UniqueName: \"kubernetes.io/projected/4ce661f6-26e2-4da2-a759-e493a60587b2-kube-api-access-xvgz4\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n\" (UID: \"4ce661f6-26e2-4da2-a759-e493a60587b2\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.874161 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l8pmb\" (UniqueName: \"kubernetes.io/projected/df9ab5f0-f577-4303-8045-f960c67a6936-kube-api-access-l8pmb\") pod \"nova-operator-controller-manager-6b8bc8d87d-4xp45\" (UID: \"df9ab5f0-f577-4303-8045-f960c67a6936\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.874192 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5lfg\" (UniqueName: \"kubernetes.io/projected/ced74bcb-8345-40c5-b2d4-3d369f30b835-kube-api-access-c5lfg\") pod \"octavia-operator-controller-manager-7bd9774b6-k8lw7\" (UID: \"ced74bcb-8345-40c5-b2d4-3d369f30b835\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-k8lw7" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.875610 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-49nrq"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.876295 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-49nrq" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.891226 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-dpjc9" Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.932087 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms"] Jan 24 00:18:16 crc kubenswrapper[4676]: I0124 00:18:16.953142 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.014183 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5lfg\" (UniqueName: \"kubernetes.io/projected/ced74bcb-8345-40c5-b2d4-3d369f30b835-kube-api-access-c5lfg\") pod \"octavia-operator-controller-manager-7bd9774b6-k8lw7\" (UID: \"ced74bcb-8345-40c5-b2d4-3d369f30b835\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-k8lw7" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.026505 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.026938 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rtqkf" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.026596 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2gzqg" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.036205 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g4d8z" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.036539 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v25h4" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.037058 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22p7t\" (UniqueName: \"kubernetes.io/projected/dd6346d8-9cf1-4364-b480-f4c2d872472f-kube-api-access-22p7t\") pod \"neutron-operator-controller-manager-78d58447c5-wqbcz\" (UID: \"dd6346d8-9cf1-4364-b480-f4c2d872472f\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.037899 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8pmb\" (UniqueName: \"kubernetes.io/projected/df9ab5f0-f577-4303-8045-f960c67a6936-kube-api-access-l8pmb\") pod \"nova-operator-controller-manager-6b8bc8d87d-4xp45\" (UID: \"df9ab5f0-f577-4303-8045-f960c67a6936\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.047896 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvgz4\" (UniqueName: \"kubernetes.io/projected/4ce661f6-26e2-4da2-a759-e493a60587b2-kube-api-access-xvgz4\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n\" (UID: \"4ce661f6-26e2-4da2-a759-e493a60587b2\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.061268 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.061801 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqqxj\" (UniqueName: \"kubernetes.io/projected/53358678-d763-4b02-a157-86a57ebd0305-kube-api-access-wqqxj\") pod \"ovn-operator-controller-manager-55db956ddc-49nrq\" (UID: \"53358678-d763-4b02-a157-86a57ebd0305\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-49nrq" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.077612 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.082576 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.087968 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.105753 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.107720 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-kmhkx" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.136835 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-49nrq"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.153492 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.166019 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pggf\" (UniqueName: \"kubernetes.io/projected/d85fa79d-818f-4079-aac4-f3fa51a90e9a-kube-api-access-4pggf\") pod \"placement-operator-controller-manager-5d646b7d76-tdqwh\" (UID: \"d85fa79d-818f-4079-aac4-f3fa51a90e9a\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.166060 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms\" (UID: \"196f45b9-e656-4760-b058-e0b5c08a50d9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.166119 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jxx26\" (UID: \"921e121c-5261-4fe7-8171-6b634babedf4\") " 
pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.166141 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgls7\" (UniqueName: \"kubernetes.io/projected/196f45b9-e656-4760-b058-e0b5c08a50d9-kube-api-access-jgls7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms\" (UID: \"196f45b9-e656-4760-b058-e0b5c08a50d9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.166183 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqqxj\" (UniqueName: \"kubernetes.io/projected/53358678-d763-4b02-a157-86a57ebd0305-kube-api-access-wqqxj\") pod \"ovn-operator-controller-manager-55db956ddc-49nrq\" (UID: \"53358678-d763-4b02-a157-86a57ebd0305\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-49nrq" Jan 24 00:18:17 crc kubenswrapper[4676]: E0124 00:18:17.166500 4676 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 00:18:17 crc kubenswrapper[4676]: E0124 00:18:17.166556 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert podName:921e121c-5261-4fe7-8171-6b634babedf4 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:18.166540183 +0000 UTC m=+882.196511184 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert") pod "infra-operator-controller-manager-58749ffdfb-jxx26" (UID: "921e121c-5261-4fe7-8171-6b634babedf4") : secret "infra-operator-webhook-server-cert" not found Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.172308 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5lfg\" (UniqueName: \"kubernetes.io/projected/ced74bcb-8345-40c5-b2d4-3d369f30b835-kube-api-access-c5lfg\") pod \"octavia-operator-controller-manager-7bd9774b6-k8lw7\" (UID: \"ced74bcb-8345-40c5-b2d4-3d369f30b835\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-k8lw7" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.175408 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.198265 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqqxj\" (UniqueName: \"kubernetes.io/projected/53358678-d763-4b02-a157-86a57ebd0305-kube-api-access-wqqxj\") pod \"ovn-operator-controller-manager-55db956ddc-49nrq\" (UID: \"53358678-d763-4b02-a157-86a57ebd0305\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-49nrq" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.199927 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5df95d5965-h8wx9"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.200733 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5df95d5965-h8wx9" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.211097 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-xd5bv" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.242813 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5df95d5965-h8wx9"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.267474 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j5xn\" (UniqueName: \"kubernetes.io/projected/b0c8972b-31d7-40c1-bc65-1478718d41a5-kube-api-access-8j5xn\") pod \"swift-operator-controller-manager-5df95d5965-h8wx9\" (UID: \"b0c8972b-31d7-40c1-bc65-1478718d41a5\") " pod="openstack-operators/swift-operator-controller-manager-5df95d5965-h8wx9" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.267531 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pggf\" (UniqueName: \"kubernetes.io/projected/d85fa79d-818f-4079-aac4-f3fa51a90e9a-kube-api-access-4pggf\") pod \"placement-operator-controller-manager-5d646b7d76-tdqwh\" (UID: \"d85fa79d-818f-4079-aac4-f3fa51a90e9a\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.267565 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms\" (UID: \"196f45b9-e656-4760-b058-e0b5c08a50d9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.267646 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jgls7\" (UniqueName: \"kubernetes.io/projected/196f45b9-e656-4760-b058-e0b5c08a50d9-kube-api-access-jgls7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms\" (UID: \"196f45b9-e656-4760-b058-e0b5c08a50d9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" Jan 24 00:18:17 crc kubenswrapper[4676]: E0124 00:18:17.268056 4676 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 00:18:17 crc kubenswrapper[4676]: E0124 00:18:17.268097 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert podName:196f45b9-e656-4760-b058-e0b5c08a50d9 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:17.768083302 +0000 UTC m=+881.798054303 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" (UID: "196f45b9-e656-4760-b058-e0b5c08a50d9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.291525 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4c5zl"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.292329 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4c5zl" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.293642 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-49nrq" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.321873 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-v4559" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.343894 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgls7\" (UniqueName: \"kubernetes.io/projected/196f45b9-e656-4760-b058-e0b5c08a50d9-kube-api-access-jgls7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms\" (UID: \"196f45b9-e656-4760-b058-e0b5c08a50d9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.364432 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.365276 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.373297 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j5xn\" (UniqueName: \"kubernetes.io/projected/b0c8972b-31d7-40c1-bc65-1478718d41a5-kube-api-access-8j5xn\") pod \"swift-operator-controller-manager-5df95d5965-h8wx9\" (UID: \"b0c8972b-31d7-40c1-bc65-1478718d41a5\") " pod="openstack-operators/swift-operator-controller-manager-5df95d5965-h8wx9" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.373405 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvlxx\" (UniqueName: \"kubernetes.io/projected/060e1c8d-dfa6-428f-bffe-d89ac3dab8c3-kube-api-access-qvlxx\") pod \"telemetry-operator-controller-manager-85cd9769bb-4c5zl\" (UID: \"060e1c8d-dfa6-428f-bffe-d89ac3dab8c3\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4c5zl" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.374944 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-hmp8k" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.386991 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pggf\" (UniqueName: \"kubernetes.io/projected/d85fa79d-818f-4079-aac4-f3fa51a90e9a-kube-api-access-4pggf\") pod \"placement-operator-controller-manager-5d646b7d76-tdqwh\" (UID: \"d85fa79d-818f-4079-aac4-f3fa51a90e9a\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.397008 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4c5zl"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.430373 4676 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-h6hzt"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.431300 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-h6hzt" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.439149 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jkv69" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.441438 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-k8lw7" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.443029 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.444617 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j5xn\" (UniqueName: \"kubernetes.io/projected/b0c8972b-31d7-40c1-bc65-1478718d41a5-kube-api-access-8j5xn\") pod \"swift-operator-controller-manager-5df95d5965-h8wx9\" (UID: \"b0c8972b-31d7-40c1-bc65-1478718d41a5\") " pod="openstack-operators/swift-operator-controller-manager-5df95d5965-h8wx9" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.475864 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvlxx\" (UniqueName: \"kubernetes.io/projected/060e1c8d-dfa6-428f-bffe-d89ac3dab8c3-kube-api-access-qvlxx\") pod \"telemetry-operator-controller-manager-85cd9769bb-4c5zl\" (UID: \"060e1c8d-dfa6-428f-bffe-d89ac3dab8c3\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4c5zl" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.476045 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-2xd87\" (UniqueName: \"kubernetes.io/projected/6245b73e-9fba-4ad7-bbbc-31db48c03825-kube-api-access-2xd87\") pod \"test-operator-controller-manager-69797bbcbd-gqg82\" (UID: \"6245b73e-9fba-4ad7-bbbc-31db48c03825\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.481446 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-h6hzt"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.499958 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.500797 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.501729 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvlxx\" (UniqueName: \"kubernetes.io/projected/060e1c8d-dfa6-428f-bffe-d89ac3dab8c3-kube-api-access-qvlxx\") pod \"telemetry-operator-controller-manager-85cd9769bb-4c5zl\" (UID: \"060e1c8d-dfa6-428f-bffe-d89ac3dab8c3\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4c5zl" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.510881 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.511046 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2p4mv" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.511186 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 24 00:18:17 crc kubenswrapper[4676]: 
I0124 00:18:17.546223 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.576992 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.577041 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xd87\" (UniqueName: \"kubernetes.io/projected/6245b73e-9fba-4ad7-bbbc-31db48c03825-kube-api-access-2xd87\") pod \"test-operator-controller-manager-69797bbcbd-gqg82\" (UID: \"6245b73e-9fba-4ad7-bbbc-31db48c03825\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.577090 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjbfn\" (UniqueName: \"kubernetes.io/projected/9d79c791-c851-4c4a-aa2d-d175b668b0f5-kube-api-access-cjbfn\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.577122 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmsrh\" (UniqueName: \"kubernetes.io/projected/ccb1ff12-bef7-4f23-b084-fae32f8202ac-kube-api-access-hmsrh\") pod \"watcher-operator-controller-manager-6d9458688d-h6hzt\" (UID: \"ccb1ff12-bef7-4f23-b084-fae32f8202ac\") " 
pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-h6hzt" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.577152 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.582367 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.596079 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xd87\" (UniqueName: \"kubernetes.io/projected/6245b73e-9fba-4ad7-bbbc-31db48c03825-kube-api-access-2xd87\") pod \"test-operator-controller-manager-69797bbcbd-gqg82\" (UID: \"6245b73e-9fba-4ad7-bbbc-31db48c03825\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.624496 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.625552 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.626577 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5df95d5965-h8wx9" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.644802 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-pplpv" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.682743 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.693805 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.693888 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjbfn\" (UniqueName: \"kubernetes.io/projected/9d79c791-c851-4c4a-aa2d-d175b668b0f5-kube-api-access-cjbfn\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.693938 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmsrh\" (UniqueName: \"kubernetes.io/projected/ccb1ff12-bef7-4f23-b084-fae32f8202ac-kube-api-access-hmsrh\") pod \"watcher-operator-controller-manager-6d9458688d-h6hzt\" (UID: \"ccb1ff12-bef7-4f23-b084-fae32f8202ac\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-h6hzt" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.693995 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.694031 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ffff\" (UniqueName: \"kubernetes.io/projected/fce4c8d0-b903-4873-8c89-2f4b9dd9c05d-kube-api-access-8ffff\") pod \"rabbitmq-cluster-operator-manager-668c99d594-h44qw\" (UID: \"fce4c8d0-b903-4873-8c89-2f4b9dd9c05d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw" Jan 24 00:18:17 crc kubenswrapper[4676]: E0124 00:18:17.694195 4676 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 00:18:17 crc kubenswrapper[4676]: E0124 00:18:17.694240 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs podName:9d79c791-c851-4c4a-aa2d-d175b668b0f5 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:18.1942269 +0000 UTC m=+882.224197901 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs") pod "openstack-operator-controller-manager-5d5f8c4f48-md6zs" (UID: "9d79c791-c851-4c4a-aa2d-d175b668b0f5") : secret "metrics-server-cert" not found Jan 24 00:18:17 crc kubenswrapper[4676]: E0124 00:18:17.694719 4676 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 00:18:17 crc kubenswrapper[4676]: E0124 00:18:17.694749 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs podName:9d79c791-c851-4c4a-aa2d-d175b668b0f5 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:18.194742586 +0000 UTC m=+882.224713587 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs") pod "openstack-operator-controller-manager-5d5f8c4f48-md6zs" (UID: "9d79c791-c851-4c4a-aa2d-d175b668b0f5") : secret "webhook-server-cert" not found Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.697704 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4c5zl" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.726127 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.732093 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmsrh\" (UniqueName: \"kubernetes.io/projected/ccb1ff12-bef7-4f23-b084-fae32f8202ac-kube-api-access-hmsrh\") pod \"watcher-operator-controller-manager-6d9458688d-h6hzt\" (UID: \"ccb1ff12-bef7-4f23-b084-fae32f8202ac\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-h6hzt" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.742549 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjbfn\" (UniqueName: \"kubernetes.io/projected/9d79c791-c851-4c4a-aa2d-d175b668b0f5-kube-api-access-cjbfn\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.761519 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-h6hzt" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.796656 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms\" (UID: \"196f45b9-e656-4760-b058-e0b5c08a50d9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.796748 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ffff\" (UniqueName: \"kubernetes.io/projected/fce4c8d0-b903-4873-8c89-2f4b9dd9c05d-kube-api-access-8ffff\") pod \"rabbitmq-cluster-operator-manager-668c99d594-h44qw\" (UID: \"fce4c8d0-b903-4873-8c89-2f4b9dd9c05d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw" Jan 24 00:18:17 crc kubenswrapper[4676]: E0124 00:18:17.797064 4676 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 00:18:17 crc kubenswrapper[4676]: E0124 00:18:17.797100 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert podName:196f45b9-e656-4760-b058-e0b5c08a50d9 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:18.79708783 +0000 UTC m=+882.827058831 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" (UID: "196f45b9-e656-4760-b058-e0b5c08a50d9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.854106 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ffff\" (UniqueName: \"kubernetes.io/projected/fce4c8d0-b903-4873-8c89-2f4b9dd9c05d-kube-api-access-8ffff\") pod \"rabbitmq-cluster-operator-manager-668c99d594-h44qw\" (UID: \"fce4c8d0-b903-4873-8c89-2f4b9dd9c05d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw" Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.876935 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-c8c6m"] Jan 24 00:18:17 crc kubenswrapper[4676]: I0124 00:18:17.993778 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-fgplq"] Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.105000 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw" Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.203412 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rbzsj"] Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.209572 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.209653 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jxx26\" (UID: \"921e121c-5261-4fe7-8171-6b634babedf4\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.209681 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:18 crc kubenswrapper[4676]: E0124 00:18:18.209809 4676 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 00:18:18 crc kubenswrapper[4676]: E0124 00:18:18.209862 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs 
podName:9d79c791-c851-4c4a-aa2d-d175b668b0f5 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:19.209848665 +0000 UTC m=+883.239819666 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs") pod "openstack-operator-controller-manager-5d5f8c4f48-md6zs" (UID: "9d79c791-c851-4c4a-aa2d-d175b668b0f5") : secret "webhook-server-cert" not found Jan 24 00:18:18 crc kubenswrapper[4676]: E0124 00:18:18.210156 4676 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 00:18:18 crc kubenswrapper[4676]: E0124 00:18:18.210180 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs podName:9d79c791-c851-4c4a-aa2d-d175b668b0f5 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:19.210173266 +0000 UTC m=+883.240144257 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs") pod "openstack-operator-controller-manager-5d5f8c4f48-md6zs" (UID: "9d79c791-c851-4c4a-aa2d-d175b668b0f5") : secret "metrics-server-cert" not found Jan 24 00:18:18 crc kubenswrapper[4676]: E0124 00:18:18.210213 4676 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 00:18:18 crc kubenswrapper[4676]: E0124 00:18:18.210230 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert podName:921e121c-5261-4fe7-8171-6b634babedf4 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:20.210224347 +0000 UTC m=+884.240195348 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert") pod "infra-operator-controller-manager-58749ffdfb-jxx26" (UID: "921e121c-5261-4fe7-8171-6b634babedf4") : secret "infra-operator-webhook-server-cert" not found Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.272020 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-btxnv"] Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.384401 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fgplq" event={"ID":"02123851-7d2f-477b-9c60-5a9922a0bc97","Type":"ContainerStarted","Data":"e20c527f30f5d5b32ca5949019111e8802f9032a5b2f39192183a1b0f159bcc5"} Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.385173 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-btxnv" event={"ID":"29e4b64d-19bd-419b-9e29-7a41e6f12ae0","Type":"ContainerStarted","Data":"44e586eb58f980cd9a0ede656a3c8c33e098e8bc63066dbec3a5a469f4584367"} Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.385911 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c8c6m" event={"ID":"0cd05b9f-6699-46e3-ae36-9f21352e6c8e","Type":"ContainerStarted","Data":"45c0fe6a9ceab2d1ac73fa5eaae6830d04b6cd08af0aa6c202dcd877e271a46c"} Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.386753 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rbzsj" event={"ID":"6b8541f9-a37a-41d6-8006-3d0335c3abb5","Type":"ContainerStarted","Data":"45eaa4d43cf04521f914a5f95aeda6ec7af6d7e2b2dc5d614004f85e53d73860"} Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.537848 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n"] Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.574276 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-v25h4"] Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.584074 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-js6db"] Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.597720 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-2gzqg"] Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.604741 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-g4d8z"] Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.608576 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-p2nr8"] Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.614454 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-49nrq"] Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.617994 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-k8lw7"] Jan 24 00:18:18 crc kubenswrapper[4676]: W0124 00:18:18.624668 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9f1e2a4_c9fa_4136_aa76_059dc2ed9c85.slice/crio-491d48398e5cb5edcfd79ff249314d3e12da8e20e4a48de40e4ece0c05936d99 WatchSource:0}: Error finding container 491d48398e5cb5edcfd79ff249314d3e12da8e20e4a48de40e4ece0c05936d99: Status 404 returned error can't find the container with id 491d48398e5cb5edcfd79ff249314d3e12da8e20e4a48de40e4ece0c05936d99 
Jan 24 00:18:18 crc kubenswrapper[4676]: W0124 00:18:18.643561 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfc79179_d245_4360_be6e_8b43441e23ed.slice/crio-1effa79b6d68858cf3890e790e598a1b0b49afa63e8a1e6465db497b22d7dcef WatchSource:0}: Error finding container 1effa79b6d68858cf3890e790e598a1b0b49afa63e8a1e6465db497b22d7dcef: Status 404 returned error can't find the container with id 1effa79b6d68858cf3890e790e598a1b0b49afa63e8a1e6465db497b22d7dcef Jan 24 00:18:18 crc kubenswrapper[4676]: W0124 00:18:18.650974 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podced74bcb_8345_40c5_b2d4_3d369f30b835.slice/crio-8c0846728bf7c6460610200c6b0a93cf611aa732e7e5e69fb5b0bd2528c44609 WatchSource:0}: Error finding container 8c0846728bf7c6460610200c6b0a93cf611aa732e7e5e69fb5b0bd2528c44609: Status 404 returned error can't find the container with id 8c0846728bf7c6460610200c6b0a93cf611aa732e7e5e69fb5b0bd2528c44609 Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.749391 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4c5zl"] Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.752589 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5df95d5965-h8wx9"] Jan 24 00:18:18 crc kubenswrapper[4676]: W0124 00:18:18.768820 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0c8972b_31d7_40c1_bc65_1478718d41a5.slice/crio-a078b2e778dd350d037e4f19a131ef8e94c2214c9bdcaa32c53c9fc68b0e9bcd WatchSource:0}: Error finding container a078b2e778dd350d037e4f19a131ef8e94c2214c9bdcaa32c53c9fc68b0e9bcd: Status 404 returned error can't find the container with id 
a078b2e778dd350d037e4f19a131ef8e94c2214c9bdcaa32c53c9fc68b0e9bcd Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.795467 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45"] Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.800860 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz"] Jan 24 00:18:18 crc kubenswrapper[4676]: E0124 00:18:18.803787 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l8pmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-4xp45_openstack-operators(df9ab5f0-f577-4303-8045-f960c67a6936): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 00:18:18 crc kubenswrapper[4676]: E0124 00:18:18.807594 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45" podUID="df9ab5f0-f577-4303-8045-f960c67a6936" Jan 24 00:18:18 crc kubenswrapper[4676]: I0124 00:18:18.823477 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms\" (UID: \"196f45b9-e656-4760-b058-e0b5c08a50d9\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" Jan 24 00:18:18 crc kubenswrapper[4676]: E0124 00:18:18.823664 4676 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 00:18:18 crc kubenswrapper[4676]: E0124 00:18:18.823728 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert podName:196f45b9-e656-4760-b058-e0b5c08a50d9 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:20.82370877 +0000 UTC m=+884.853679771 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" (UID: "196f45b9-e656-4760-b058-e0b5c08a50d9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 00:18:18 crc kubenswrapper[4676]: E0124 00:18:18.833553 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-22p7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-wqbcz_openstack-operators(dd6346d8-9cf1-4364-b480-f4c2d872472f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 00:18:18 crc kubenswrapper[4676]: E0124 00:18:18.836534 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz" podUID="dd6346d8-9cf1-4364-b480-f4c2d872472f" Jan 24 00:18:18 crc 
kubenswrapper[4676]: I0124 00:18:18.844126 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh"] Jan 24 00:18:18 crc kubenswrapper[4676]: E0124 00:18:18.853308 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4pggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-tdqwh_openstack-operators(d85fa79d-818f-4079-aac4-f3fa51a90e9a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 00:18:18 crc kubenswrapper[4676]: E0124 00:18:18.856306 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh" podUID="d85fa79d-818f-4079-aac4-f3fa51a90e9a" Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.066739 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-h6hzt"] Jan 24 00:18:19 crc kubenswrapper[4676]: W0124 00:18:19.073704 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccb1ff12_bef7_4f23_b084_fae32f8202ac.slice/crio-b1531e8c152f4f320cfbdfbb33ee66a0f7d7f32bae2533101682bcec0d01b59a WatchSource:0}: Error finding container b1531e8c152f4f320cfbdfbb33ee66a0f7d7f32bae2533101682bcec0d01b59a: Status 404 returned error can't find the container with id 
b1531e8c152f4f320cfbdfbb33ee66a0f7d7f32bae2533101682bcec0d01b59a Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.080487 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw"] Jan 24 00:18:19 crc kubenswrapper[4676]: W0124 00:18:19.087756 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfce4c8d0_b903_4873_8c89_2f4b9dd9c05d.slice/crio-4eff56239fa4d23e04006207267c571b42ebc5d352c2fd76046ae95a353f124c WatchSource:0}: Error finding container 4eff56239fa4d23e04006207267c571b42ebc5d352c2fd76046ae95a353f124c: Status 404 returned error can't find the container with id 4eff56239fa4d23e04006207267c571b42ebc5d352c2fd76046ae95a353f124c Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.096144 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82"] Jan 24 00:18:19 crc kubenswrapper[4676]: E0124 00:18:19.099008 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: 
{{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8ffff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-h44qw_openstack-operators(fce4c8d0-b903-4873-8c89-2f4b9dd9c05d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 00:18:19 crc kubenswrapper[4676]: E0124 00:18:19.100130 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw" podUID="fce4c8d0-b903-4873-8c89-2f4b9dd9c05d" Jan 24 00:18:19 crc kubenswrapper[4676]: W0124 00:18:19.100483 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6245b73e_9fba_4ad7_bbbc_31db48c03825.slice/crio-7731d3601c99b3906aa086320f86fb3b157d3282fd194cb0c802ec8777cf95a8 WatchSource:0}: Error finding container 7731d3601c99b3906aa086320f86fb3b157d3282fd194cb0c802ec8777cf95a8: Status 404 returned error can't find the container with id 
7731d3601c99b3906aa086320f86fb3b157d3282fd194cb0c802ec8777cf95a8 Jan 24 00:18:19 crc kubenswrapper[4676]: E0124 00:18:19.102282 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2xd87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-gqg82_openstack-operators(6245b73e-9fba-4ad7-bbbc-31db48c03825): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 00:18:19 crc kubenswrapper[4676]: E0124 00:18:19.103610 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82" podUID="6245b73e-9fba-4ad7-bbbc-31db48c03825" Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.229238 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.229319 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:19 crc kubenswrapper[4676]: E0124 00:18:19.229440 4676 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 00:18:19 crc kubenswrapper[4676]: E0124 00:18:19.229454 4676 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 00:18:19 crc kubenswrapper[4676]: E0124 00:18:19.229513 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs podName:9d79c791-c851-4c4a-aa2d-d175b668b0f5 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:21.22950014 +0000 UTC m=+885.259471131 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs") pod "openstack-operator-controller-manager-5d5f8c4f48-md6zs" (UID: "9d79c791-c851-4c4a-aa2d-d175b668b0f5") : secret "metrics-server-cert" not found Jan 24 00:18:19 crc kubenswrapper[4676]: E0124 00:18:19.229528 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs podName:9d79c791-c851-4c4a-aa2d-d175b668b0f5 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:21.22952337 +0000 UTC m=+885.259494371 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs") pod "openstack-operator-controller-manager-5d5f8c4f48-md6zs" (UID: "9d79c791-c851-4c4a-aa2d-d175b668b0f5") : secret "webhook-server-cert" not found Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.403369 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw" event={"ID":"fce4c8d0-b903-4873-8c89-2f4b9dd9c05d","Type":"ContainerStarted","Data":"4eff56239fa4d23e04006207267c571b42ebc5d352c2fd76046ae95a353f124c"} Jan 24 00:18:19 crc kubenswrapper[4676]: E0124 00:18:19.406073 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw" podUID="fce4c8d0-b903-4873-8c89-2f4b9dd9c05d" Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.407560 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-k8lw7" event={"ID":"ced74bcb-8345-40c5-b2d4-3d369f30b835","Type":"ContainerStarted","Data":"8c0846728bf7c6460610200c6b0a93cf611aa732e7e5e69fb5b0bd2528c44609"} Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.411348 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n" event={"ID":"4ce661f6-26e2-4da2-a759-e493a60587b2","Type":"ContainerStarted","Data":"f1039526aec7f470820993e29e55c188bc4ec94d2c8293216741654459984a74"} Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.422973 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g4d8z" event={"ID":"dfc79179-d245-4360-be6e-8b43441e23ed","Type":"ContainerStarted","Data":"1effa79b6d68858cf3890e790e598a1b0b49afa63e8a1e6465db497b22d7dcef"} Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.433497 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5df95d5965-h8wx9" event={"ID":"b0c8972b-31d7-40c1-bc65-1478718d41a5","Type":"ContainerStarted","Data":"a078b2e778dd350d037e4f19a131ef8e94c2214c9bdcaa32c53c9fc68b0e9bcd"} Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.445651 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-h6hzt" event={"ID":"ccb1ff12-bef7-4f23-b084-fae32f8202ac","Type":"ContainerStarted","Data":"b1531e8c152f4f320cfbdfbb33ee66a0f7d7f32bae2533101682bcec0d01b59a"} Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.446621 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh" event={"ID":"d85fa79d-818f-4079-aac4-f3fa51a90e9a","Type":"ContainerStarted","Data":"4d82172944031bfd3d897c7e4534f048812b4aa5b3bc8fda4f12eaeb9d62c1ff"} Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.448632 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-js6db" event={"ID":"a9f1e2a4-c9fa-4136-aa76-059dc2ed9c85","Type":"ContainerStarted","Data":"491d48398e5cb5edcfd79ff249314d3e12da8e20e4a48de40e4ece0c05936d99"} Jan 24 00:18:19 crc kubenswrapper[4676]: E0124 00:18:19.451783 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" 
pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh" podUID="d85fa79d-818f-4079-aac4-f3fa51a90e9a" Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.452459 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45" event={"ID":"df9ab5f0-f577-4303-8045-f960c67a6936","Type":"ContainerStarted","Data":"3ce58638fcbb3d58412b193afdac8082c6db1cc56b208770ffcde92f9729f9eb"} Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.453432 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-p2nr8" event={"ID":"5e9cf1cb-c413-45ad-8a51-bf35407fcdfe","Type":"ContainerStarted","Data":"11c933af0ebf415f4a79c80ce49d3a9fc82063b843f7764cff971c55914a754b"} Jan 24 00:18:19 crc kubenswrapper[4676]: E0124 00:18:19.456362 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45" podUID="df9ab5f0-f577-4303-8045-f960c67a6936" Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.456807 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4c5zl" event={"ID":"060e1c8d-dfa6-428f-bffe-d89ac3dab8c3","Type":"ContainerStarted","Data":"ff54c3cc814d92ee1cc321d45609ae8b43da463eafa4e73ee76a82ae916f552a"} Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.464595 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82" event={"ID":"6245b73e-9fba-4ad7-bbbc-31db48c03825","Type":"ContainerStarted","Data":"7731d3601c99b3906aa086320f86fb3b157d3282fd194cb0c802ec8777cf95a8"} Jan 24 00:18:19 
crc kubenswrapper[4676]: E0124 00:18:19.466860 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82" podUID="6245b73e-9fba-4ad7-bbbc-31db48c03825" Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.466915 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz" event={"ID":"dd6346d8-9cf1-4364-b480-f4c2d872472f","Type":"ContainerStarted","Data":"0622fb49a57eae1e8e00740432fec781732d4906a697cdd1a79d78bf9f6d18c5"} Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.469519 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v25h4" event={"ID":"555ebb8f-1bc3-4b8d-9f37-cad92b48477c","Type":"ContainerStarted","Data":"70165325ea58298e820a24031f2133bcfafe721afd51512679019a903f8702c2"} Jan 24 00:18:19 crc kubenswrapper[4676]: E0124 00:18:19.470210 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz" podUID="dd6346d8-9cf1-4364-b480-f4c2d872472f" Jan 24 00:18:19 crc kubenswrapper[4676]: I0124 00:18:19.470647 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2gzqg" event={"ID":"9a3f9a14-1138-425d-8a56-454b282d7d9f","Type":"ContainerStarted","Data":"ca37e0e8975c1fe950c6122163b3ae7d6c94476bcd0e2f65d36f111b4a09c40e"} Jan 24 00:18:19 crc 
kubenswrapper[4676]: I0124 00:18:19.473740 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-49nrq" event={"ID":"53358678-d763-4b02-a157-86a57ebd0305","Type":"ContainerStarted","Data":"d2f11503b7cc79ec0c2b3bfdaa122f0a43cfe74be5d546eaf9d1087571849c60"} Jan 24 00:18:20 crc kubenswrapper[4676]: I0124 00:18:20.248103 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jxx26\" (UID: \"921e121c-5261-4fe7-8171-6b634babedf4\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" Jan 24 00:18:20 crc kubenswrapper[4676]: E0124 00:18:20.248590 4676 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 00:18:20 crc kubenswrapper[4676]: E0124 00:18:20.248647 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert podName:921e121c-5261-4fe7-8171-6b634babedf4 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:24.248628938 +0000 UTC m=+888.278599949 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert") pod "infra-operator-controller-manager-58749ffdfb-jxx26" (UID: "921e121c-5261-4fe7-8171-6b634babedf4") : secret "infra-operator-webhook-server-cert" not found Jan 24 00:18:20 crc kubenswrapper[4676]: E0124 00:18:20.492578 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz" podUID="dd6346d8-9cf1-4364-b480-f4c2d872472f" Jan 24 00:18:20 crc kubenswrapper[4676]: E0124 00:18:20.493206 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45" podUID="df9ab5f0-f577-4303-8045-f960c67a6936" Jan 24 00:18:20 crc kubenswrapper[4676]: E0124 00:18:20.493481 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh" podUID="d85fa79d-818f-4079-aac4-f3fa51a90e9a" Jan 24 00:18:20 crc kubenswrapper[4676]: E0124 00:18:20.493939 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82" podUID="6245b73e-9fba-4ad7-bbbc-31db48c03825" Jan 24 00:18:20 crc kubenswrapper[4676]: E0124 00:18:20.494622 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw" podUID="fce4c8d0-b903-4873-8c89-2f4b9dd9c05d" Jan 24 00:18:20 crc kubenswrapper[4676]: I0124 00:18:20.862990 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms\" (UID: \"196f45b9-e656-4760-b058-e0b5c08a50d9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" Jan 24 00:18:20 crc kubenswrapper[4676]: E0124 00:18:20.863167 4676 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 00:18:20 crc kubenswrapper[4676]: E0124 00:18:20.863238 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert podName:196f45b9-e656-4760-b058-e0b5c08a50d9 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:24.863221104 +0000 UTC m=+888.893192105 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" (UID: "196f45b9-e656-4760-b058-e0b5c08a50d9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 00:18:21 crc kubenswrapper[4676]: I0124 00:18:21.275950 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:21 crc kubenswrapper[4676]: I0124 00:18:21.276129 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:21 crc kubenswrapper[4676]: E0124 00:18:21.276135 4676 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 00:18:21 crc kubenswrapper[4676]: E0124 00:18:21.276231 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs podName:9d79c791-c851-4c4a-aa2d-d175b668b0f5 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:25.276209336 +0000 UTC m=+889.306180337 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs") pod "openstack-operator-controller-manager-5d5f8c4f48-md6zs" (UID: "9d79c791-c851-4c4a-aa2d-d175b668b0f5") : secret "webhook-server-cert" not found Jan 24 00:18:21 crc kubenswrapper[4676]: E0124 00:18:21.276284 4676 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 00:18:21 crc kubenswrapper[4676]: E0124 00:18:21.276364 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs podName:9d79c791-c851-4c4a-aa2d-d175b668b0f5 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:25.27634521 +0000 UTC m=+889.306316211 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs") pod "openstack-operator-controller-manager-5d5f8c4f48-md6zs" (UID: "9d79c791-c851-4c4a-aa2d-d175b668b0f5") : secret "metrics-server-cert" not found Jan 24 00:18:24 crc kubenswrapper[4676]: I0124 00:18:24.335901 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jxx26\" (UID: \"921e121c-5261-4fe7-8171-6b634babedf4\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" Jan 24 00:18:24 crc kubenswrapper[4676]: E0124 00:18:24.336733 4676 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 00:18:24 crc kubenswrapper[4676]: E0124 00:18:24.336785 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert 
podName:921e121c-5261-4fe7-8171-6b634babedf4 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:32.336767287 +0000 UTC m=+896.366738288 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert") pod "infra-operator-controller-manager-58749ffdfb-jxx26" (UID: "921e121c-5261-4fe7-8171-6b634babedf4") : secret "infra-operator-webhook-server-cert" not found Jan 24 00:18:24 crc kubenswrapper[4676]: I0124 00:18:24.944309 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms\" (UID: \"196f45b9-e656-4760-b058-e0b5c08a50d9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" Jan 24 00:18:24 crc kubenswrapper[4676]: E0124 00:18:24.944577 4676 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 00:18:24 crc kubenswrapper[4676]: E0124 00:18:24.944669 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert podName:196f45b9-e656-4760-b058-e0b5c08a50d9 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:32.944645947 +0000 UTC m=+896.974616958 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" (UID: "196f45b9-e656-4760-b058-e0b5c08a50d9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 00:18:25 crc kubenswrapper[4676]: I0124 00:18:25.351030 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:25 crc kubenswrapper[4676]: I0124 00:18:25.351216 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:25 crc kubenswrapper[4676]: E0124 00:18:25.352009 4676 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 00:18:25 crc kubenswrapper[4676]: E0124 00:18:25.352092 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs podName:9d79c791-c851-4c4a-aa2d-d175b668b0f5 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:33.352070258 +0000 UTC m=+897.382041269 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs") pod "openstack-operator-controller-manager-5d5f8c4f48-md6zs" (UID: "9d79c791-c851-4c4a-aa2d-d175b668b0f5") : secret "webhook-server-cert" not found Jan 24 00:18:25 crc kubenswrapper[4676]: E0124 00:18:25.352492 4676 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 00:18:25 crc kubenswrapper[4676]: E0124 00:18:25.352584 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs podName:9d79c791-c851-4c4a-aa2d-d175b668b0f5 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:33.352560403 +0000 UTC m=+897.382531434 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs") pod "openstack-operator-controller-manager-5d5f8c4f48-md6zs" (UID: "9d79c791-c851-4c4a-aa2d-d175b668b0f5") : secret "metrics-server-cert" not found Jan 24 00:18:32 crc kubenswrapper[4676]: I0124 00:18:32.348538 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jxx26\" (UID: \"921e121c-5261-4fe7-8171-6b634babedf4\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" Jan 24 00:18:32 crc kubenswrapper[4676]: E0124 00:18:32.348753 4676 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 00:18:32 crc kubenswrapper[4676]: E0124 00:18:32.349405 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert 
podName:921e121c-5261-4fe7-8171-6b634babedf4 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:48.349354671 +0000 UTC m=+912.379325712 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert") pod "infra-operator-controller-manager-58749ffdfb-jxx26" (UID: "921e121c-5261-4fe7-8171-6b634babedf4") : secret "infra-operator-webhook-server-cert" not found Jan 24 00:18:32 crc kubenswrapper[4676]: E0124 00:18:32.908811 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551" Jan 24 00:18:32 crc kubenswrapper[4676]: E0124 00:18:32.909629 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hmsrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6d9458688d-h6hzt_openstack-operators(ccb1ff12-bef7-4f23-b084-fae32f8202ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:18:32 crc kubenswrapper[4676]: E0124 00:18:32.911001 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-h6hzt" podUID="ccb1ff12-bef7-4f23-b084-fae32f8202ac" Jan 24 00:18:32 crc kubenswrapper[4676]: I0124 00:18:32.959892 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms\" (UID: \"196f45b9-e656-4760-b058-e0b5c08a50d9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" Jan 24 00:18:32 crc kubenswrapper[4676]: E0124 00:18:32.960349 4676 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 00:18:32 crc kubenswrapper[4676]: E0124 00:18:32.960492 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert podName:196f45b9-e656-4760-b058-e0b5c08a50d9 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:48.96046224 +0000 UTC m=+912.990433281 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" (UID: "196f45b9-e656-4760-b058-e0b5c08a50d9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 00:18:33 crc kubenswrapper[4676]: I0124 00:18:33.366514 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:33 crc kubenswrapper[4676]: I0124 00:18:33.366668 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:33 crc kubenswrapper[4676]: E0124 00:18:33.366897 4676 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 00:18:33 crc kubenswrapper[4676]: E0124 00:18:33.366969 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs podName:9d79c791-c851-4c4a-aa2d-d175b668b0f5 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:49.366946802 +0000 UTC m=+913.396917853 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs") pod "openstack-operator-controller-manager-5d5f8c4f48-md6zs" (UID: "9d79c791-c851-4c4a-aa2d-d175b668b0f5") : secret "metrics-server-cert" not found Jan 24 00:18:33 crc kubenswrapper[4676]: E0124 00:18:33.367683 4676 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 00:18:33 crc kubenswrapper[4676]: E0124 00:18:33.367749 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs podName:9d79c791-c851-4c4a-aa2d-d175b668b0f5 nodeName:}" failed. No retries permitted until 2026-01-24 00:18:49.367730936 +0000 UTC m=+913.397701977 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs") pod "openstack-operator-controller-manager-5d5f8c4f48-md6zs" (UID: "9d79c791-c851-4c4a-aa2d-d175b668b0f5") : secret "webhook-server-cert" not found Jan 24 00:18:33 crc kubenswrapper[4676]: E0124 00:18:33.606418 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-h6hzt" podUID="ccb1ff12-bef7-4f23-b084-fae32f8202ac" Jan 24 00:18:34 crc kubenswrapper[4676]: E0124 00:18:34.022930 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 24 00:18:34 crc kubenswrapper[4676]: E0124 
00:18:34.023237 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qsl2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-p2nr8_openstack-operators(5e9cf1cb-c413-45ad-8a51-bf35407fcdfe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:18:34 crc kubenswrapper[4676]: E0124 00:18:34.024776 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-p2nr8" podUID="5e9cf1cb-c413-45ad-8a51-bf35407fcdfe" Jan 24 00:18:34 crc kubenswrapper[4676]: E0124 00:18:34.611789 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-p2nr8" podUID="5e9cf1cb-c413-45ad-8a51-bf35407fcdfe" Jan 24 00:18:36 crc kubenswrapper[4676]: E0124 00:18:36.491636 4676 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 24 00:18:36 crc kubenswrapper[4676]: E0124 00:18:36.492232 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6f9hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-rbzsj_openstack-operators(6b8541f9-a37a-41d6-8006-3d0335c3abb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:18:36 crc kubenswrapper[4676]: E0124 00:18:36.493452 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rbzsj" podUID="6b8541f9-a37a-41d6-8006-3d0335c3abb5" Jan 24 00:18:36 crc kubenswrapper[4676]: E0124 00:18:36.630083 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rbzsj" podUID="6b8541f9-a37a-41d6-8006-3d0335c3abb5" Jan 24 00:18:40 crc kubenswrapper[4676]: E0124 00:18:40.993710 4676 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 24 00:18:40 crc kubenswrapper[4676]: E0124 00:18:40.994455 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c2lp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-js6db_openstack-operators(a9f1e2a4-c9fa-4136-aa76-059dc2ed9c85): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:18:40 crc kubenswrapper[4676]: E0124 00:18:40.995665 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-js6db" podUID="a9f1e2a4-c9fa-4136-aa76-059dc2ed9c85" Jan 24 00:18:41 crc kubenswrapper[4676]: E0124 00:18:41.672167 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-js6db" podUID="a9f1e2a4-c9fa-4136-aa76-059dc2ed9c85" Jan 24 00:18:43 crc kubenswrapper[4676]: E0124 00:18:43.948025 4676 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 24 00:18:43 crc kubenswrapper[4676]: E0124 00:18:43.948725 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-92kpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-btxnv_openstack-operators(29e4b64d-19bd-419b-9e29-7a41e6f12ae0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:18:43 crc kubenswrapper[4676]: E0124 00:18:43.949958 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-btxnv" podUID="29e4b64d-19bd-419b-9e29-7a41e6f12ae0" Jan 24 00:18:44 crc kubenswrapper[4676]: E0124 00:18:44.682047 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-btxnv" podUID="29e4b64d-19bd-419b-9e29-7a41e6f12ae0" Jan 24 00:18:46 crc kubenswrapper[4676]: E0124 00:18:46.744819 4676 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5" Jan 24 00:18:46 crc kubenswrapper[4676]: E0124 00:18:46.745352 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c5lfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-k8lw7_openstack-operators(ced74bcb-8345-40c5-b2d4-3d369f30b835): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:18:46 crc kubenswrapper[4676]: E0124 00:18:46.748828 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-k8lw7" podUID="ced74bcb-8345-40c5-b2d4-3d369f30b835" Jan 24 00:18:47 crc kubenswrapper[4676]: E0124 00:18:47.574132 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 24 00:18:47 crc kubenswrapper[4676]: E0124 00:18:47.574332 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fqcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-v25h4_openstack-operators(555ebb8f-1bc3-4b8d-9f37-cad92b48477c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:18:47 crc kubenswrapper[4676]: E0124 00:18:47.575540 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v25h4" podUID="555ebb8f-1bc3-4b8d-9f37-cad92b48477c" Jan 24 00:18:47 crc kubenswrapper[4676]: E0124 00:18:47.700278 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v25h4" podUID="555ebb8f-1bc3-4b8d-9f37-cad92b48477c" Jan 24 00:18:47 crc kubenswrapper[4676]: E0124 00:18:47.701570 4676 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-k8lw7" podUID="ced74bcb-8345-40c5-b2d4-3d369f30b835" Jan 24 00:18:48 crc kubenswrapper[4676]: E0124 00:18:48.244066 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 24 00:18:48 crc kubenswrapper[4676]: E0124 00:18:48.244272 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l8zz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-g4d8z_openstack-operators(dfc79179-d245-4360-be6e-8b43441e23ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:18:48 crc kubenswrapper[4676]: E0124 00:18:48.245494 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g4d8z" podUID="dfc79179-d245-4360-be6e-8b43441e23ed" Jan 24 00:18:48 crc kubenswrapper[4676]: I0124 00:18:48.425144 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert\") pod 
\"infra-operator-controller-manager-58749ffdfb-jxx26\" (UID: \"921e121c-5261-4fe7-8171-6b634babedf4\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" Jan 24 00:18:48 crc kubenswrapper[4676]: I0124 00:18:48.430652 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/921e121c-5261-4fe7-8171-6b634babedf4-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jxx26\" (UID: \"921e121c-5261-4fe7-8171-6b634babedf4\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" Jan 24 00:18:48 crc kubenswrapper[4676]: I0124 00:18:48.547595 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kcp2q" Jan 24 00:18:48 crc kubenswrapper[4676]: I0124 00:18:48.556661 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" Jan 24 00:18:48 crc kubenswrapper[4676]: E0124 00:18:48.705499 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g4d8z" podUID="dfc79179-d245-4360-be6e-8b43441e23ed" Jan 24 00:18:49 crc kubenswrapper[4676]: I0124 00:18:49.034399 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms\" (UID: \"196f45b9-e656-4760-b058-e0b5c08a50d9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" Jan 24 00:18:49 crc kubenswrapper[4676]: I0124 00:18:49.041091 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/196f45b9-e656-4760-b058-e0b5c08a50d9-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms\" (UID: \"196f45b9-e656-4760-b058-e0b5c08a50d9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" Jan 24 00:18:49 crc kubenswrapper[4676]: I0124 00:18:49.340942 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rtqkf" Jan 24 00:18:49 crc kubenswrapper[4676]: I0124 00:18:49.350208 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" Jan 24 00:18:49 crc kubenswrapper[4676]: I0124 00:18:49.439901 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:49 crc kubenswrapper[4676]: I0124 00:18:49.440062 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:49 crc kubenswrapper[4676]: I0124 00:18:49.445432 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-metrics-certs\") pod 
\"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:49 crc kubenswrapper[4676]: I0124 00:18:49.446873 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d79c791-c851-4c4a-aa2d-d175b668b0f5-webhook-certs\") pod \"openstack-operator-controller-manager-5d5f8c4f48-md6zs\" (UID: \"9d79c791-c851-4c4a-aa2d-d175b668b0f5\") " pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:49 crc kubenswrapper[4676]: I0124 00:18:49.644593 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2p4mv" Jan 24 00:18:49 crc kubenswrapper[4676]: I0124 00:18:49.650686 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:18:51 crc kubenswrapper[4676]: E0124 00:18:51.270428 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 24 00:18:51 crc kubenswrapper[4676]: E0124 00:18:51.272095 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqqxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-49nrq_openstack-operators(53358678-d763-4b02-a157-86a57ebd0305): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:18:51 crc kubenswrapper[4676]: E0124 00:18:51.273416 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-49nrq" podUID="53358678-d763-4b02-a157-86a57ebd0305" Jan 24 00:18:51 crc kubenswrapper[4676]: E0124 00:18:51.726544 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-49nrq" podUID="53358678-d763-4b02-a157-86a57ebd0305" Jan 24 00:18:52 crc kubenswrapper[4676]: E0124 00:18:52.013932 4676 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 24 00:18:52 crc kubenswrapper[4676]: E0124 00:18:52.014122 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2xd87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-gqg82_openstack-operators(6245b73e-9fba-4ad7-bbbc-31db48c03825): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:18:52 crc kubenswrapper[4676]: E0124 00:18:52.015520 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82" podUID="6245b73e-9fba-4ad7-bbbc-31db48c03825" Jan 24 00:18:52 crc kubenswrapper[4676]: E0124 00:18:52.595107 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 24 00:18:52 crc kubenswrapper[4676]: E0124 00:18:52.595299 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-np4zv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-2gzqg_openstack-operators(9a3f9a14-1138-425d-8a56-454b282d7d9f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:18:52 crc kubenswrapper[4676]: E0124 00:18:52.596824 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2gzqg" podUID="9a3f9a14-1138-425d-8a56-454b282d7d9f" Jan 24 00:18:52 crc kubenswrapper[4676]: I0124 00:18:52.632327 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8rvgb"] Jan 24 00:18:52 crc kubenswrapper[4676]: I0124 00:18:52.635127 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8rvgb" Jan 24 00:18:52 crc kubenswrapper[4676]: I0124 00:18:52.643637 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8rvgb"] Jan 24 00:18:52 crc kubenswrapper[4676]: E0124 00:18:52.737191 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2gzqg" podUID="9a3f9a14-1138-425d-8a56-454b282d7d9f" Jan 24 00:18:52 crc kubenswrapper[4676]: I0124 00:18:52.792998 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-catalog-content\") pod \"community-operators-8rvgb\" (UID: \"dccfed4b-e148-4e09-8c96-f5a28fefd6b7\") " pod="openshift-marketplace/community-operators-8rvgb" Jan 24 00:18:52 crc kubenswrapper[4676]: I0124 00:18:52.793145 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-utilities\") pod \"community-operators-8rvgb\" (UID: \"dccfed4b-e148-4e09-8c96-f5a28fefd6b7\") " pod="openshift-marketplace/community-operators-8rvgb" Jan 24 00:18:52 crc kubenswrapper[4676]: I0124 00:18:52.793240 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sf2s\" (UniqueName: \"kubernetes.io/projected/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-kube-api-access-4sf2s\") pod \"community-operators-8rvgb\" (UID: \"dccfed4b-e148-4e09-8c96-f5a28fefd6b7\") " pod="openshift-marketplace/community-operators-8rvgb" Jan 24 00:18:52 crc 
kubenswrapper[4676]: I0124 00:18:52.894118 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sf2s\" (UniqueName: \"kubernetes.io/projected/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-kube-api-access-4sf2s\") pod \"community-operators-8rvgb\" (UID: \"dccfed4b-e148-4e09-8c96-f5a28fefd6b7\") " pod="openshift-marketplace/community-operators-8rvgb" Jan 24 00:18:52 crc kubenswrapper[4676]: I0124 00:18:52.894183 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-catalog-content\") pod \"community-operators-8rvgb\" (UID: \"dccfed4b-e148-4e09-8c96-f5a28fefd6b7\") " pod="openshift-marketplace/community-operators-8rvgb" Jan 24 00:18:52 crc kubenswrapper[4676]: I0124 00:18:52.894223 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-utilities\") pod \"community-operators-8rvgb\" (UID: \"dccfed4b-e148-4e09-8c96-f5a28fefd6b7\") " pod="openshift-marketplace/community-operators-8rvgb" Jan 24 00:18:52 crc kubenswrapper[4676]: I0124 00:18:52.894699 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-utilities\") pod \"community-operators-8rvgb\" (UID: \"dccfed4b-e148-4e09-8c96-f5a28fefd6b7\") " pod="openshift-marketplace/community-operators-8rvgb" Jan 24 00:18:52 crc kubenswrapper[4676]: I0124 00:18:52.894808 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-catalog-content\") pod \"community-operators-8rvgb\" (UID: \"dccfed4b-e148-4e09-8c96-f5a28fefd6b7\") " pod="openshift-marketplace/community-operators-8rvgb" Jan 24 00:18:52 crc kubenswrapper[4676]: I0124 00:18:52.920033 
4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sf2s\" (UniqueName: \"kubernetes.io/projected/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-kube-api-access-4sf2s\") pod \"community-operators-8rvgb\" (UID: \"dccfed4b-e148-4e09-8c96-f5a28fefd6b7\") " pod="openshift-marketplace/community-operators-8rvgb" Jan 24 00:18:52 crc kubenswrapper[4676]: I0124 00:18:52.963266 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8rvgb" Jan 24 00:18:54 crc kubenswrapper[4676]: E0124 00:18:54.634286 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.69:5001/openstack-k8s-operators/swift-operator:23f4c2121362f560ce87ad957773c006d47a5413" Jan 24 00:18:54 crc kubenswrapper[4676]: E0124 00:18:54.634865 4676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.69:5001/openstack-k8s-operators/swift-operator:23f4c2121362f560ce87ad957773c006d47a5413" Jan 24 00:18:54 crc kubenswrapper[4676]: E0124 00:18:54.635026 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.69:5001/openstack-k8s-operators/swift-operator:23f4c2121362f560ce87ad957773c006d47a5413,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8j5xn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5df95d5965-h8wx9_openstack-operators(b0c8972b-31d7-40c1-bc65-1478718d41a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:18:54 crc kubenswrapper[4676]: E0124 00:18:54.636152 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/swift-operator-controller-manager-5df95d5965-h8wx9" podUID="b0c8972b-31d7-40c1-bc65-1478718d41a5" Jan 24 00:18:54 crc kubenswrapper[4676]: E0124 00:18:54.746417 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.69:5001/openstack-k8s-operators/swift-operator:23f4c2121362f560ce87ad957773c006d47a5413\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5df95d5965-h8wx9" podUID="b0c8972b-31d7-40c1-bc65-1478718d41a5" Jan 24 00:18:55 crc kubenswrapper[4676]: E0124 00:18:55.588873 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 24 00:18:55 crc kubenswrapper[4676]: E0124 00:18:55.589173 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-22p7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-wqbcz_openstack-operators(dd6346d8-9cf1-4364-b480-f4c2d872472f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:18:55 crc kubenswrapper[4676]: E0124 00:18:55.591074 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz" podUID="dd6346d8-9cf1-4364-b480-f4c2d872472f" Jan 24 00:18:57 crc kubenswrapper[4676]: I0124 00:18:57.194279 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lbhrz"] Jan 24 00:18:57 crc kubenswrapper[4676]: I0124 00:18:57.196217 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbhrz" Jan 24 00:18:57 crc kubenswrapper[4676]: I0124 00:18:57.205632 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbhrz"] Jan 24 00:18:57 crc kubenswrapper[4676]: I0124 00:18:57.376686 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksl7d\" (UniqueName: \"kubernetes.io/projected/6fb6d395-75c7-4f58-85cf-a97ad12bb288-kube-api-access-ksl7d\") pod \"redhat-marketplace-lbhrz\" (UID: \"6fb6d395-75c7-4f58-85cf-a97ad12bb288\") " pod="openshift-marketplace/redhat-marketplace-lbhrz" Jan 24 00:18:57 crc kubenswrapper[4676]: I0124 00:18:57.376809 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fb6d395-75c7-4f58-85cf-a97ad12bb288-utilities\") pod \"redhat-marketplace-lbhrz\" (UID: \"6fb6d395-75c7-4f58-85cf-a97ad12bb288\") " pod="openshift-marketplace/redhat-marketplace-lbhrz" Jan 24 00:18:57 crc kubenswrapper[4676]: I0124 00:18:57.376834 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb6d395-75c7-4f58-85cf-a97ad12bb288-catalog-content\") pod \"redhat-marketplace-lbhrz\" (UID: \"6fb6d395-75c7-4f58-85cf-a97ad12bb288\") " pod="openshift-marketplace/redhat-marketplace-lbhrz" Jan 24 00:18:57 crc kubenswrapper[4676]: I0124 00:18:57.478362 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ksl7d\" (UniqueName: \"kubernetes.io/projected/6fb6d395-75c7-4f58-85cf-a97ad12bb288-kube-api-access-ksl7d\") pod \"redhat-marketplace-lbhrz\" (UID: \"6fb6d395-75c7-4f58-85cf-a97ad12bb288\") " pod="openshift-marketplace/redhat-marketplace-lbhrz" Jan 24 00:18:57 crc kubenswrapper[4676]: I0124 00:18:57.478538 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fb6d395-75c7-4f58-85cf-a97ad12bb288-utilities\") pod \"redhat-marketplace-lbhrz\" (UID: \"6fb6d395-75c7-4f58-85cf-a97ad12bb288\") " pod="openshift-marketplace/redhat-marketplace-lbhrz" Jan 24 00:18:57 crc kubenswrapper[4676]: I0124 00:18:57.478572 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb6d395-75c7-4f58-85cf-a97ad12bb288-catalog-content\") pod \"redhat-marketplace-lbhrz\" (UID: \"6fb6d395-75c7-4f58-85cf-a97ad12bb288\") " pod="openshift-marketplace/redhat-marketplace-lbhrz" Jan 24 00:18:57 crc kubenswrapper[4676]: I0124 00:18:57.479315 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb6d395-75c7-4f58-85cf-a97ad12bb288-catalog-content\") pod \"redhat-marketplace-lbhrz\" (UID: \"6fb6d395-75c7-4f58-85cf-a97ad12bb288\") " pod="openshift-marketplace/redhat-marketplace-lbhrz" Jan 24 00:18:57 crc kubenswrapper[4676]: I0124 00:18:57.479861 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fb6d395-75c7-4f58-85cf-a97ad12bb288-utilities\") pod \"redhat-marketplace-lbhrz\" (UID: \"6fb6d395-75c7-4f58-85cf-a97ad12bb288\") " pod="openshift-marketplace/redhat-marketplace-lbhrz" Jan 24 00:18:57 crc kubenswrapper[4676]: I0124 00:18:57.502349 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ksl7d\" (UniqueName: \"kubernetes.io/projected/6fb6d395-75c7-4f58-85cf-a97ad12bb288-kube-api-access-ksl7d\") pod \"redhat-marketplace-lbhrz\" (UID: \"6fb6d395-75c7-4f58-85cf-a97ad12bb288\") " pod="openshift-marketplace/redhat-marketplace-lbhrz" Jan 24 00:18:57 crc kubenswrapper[4676]: I0124 00:18:57.521479 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbhrz" Jan 24 00:18:59 crc kubenswrapper[4676]: E0124 00:18:59.929069 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0" Jan 24 00:18:59 crc kubenswrapper[4676]: E0124 00:18:59.929530 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4pggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-tdqwh_openstack-operators(d85fa79d-818f-4079-aac4-f3fa51a90e9a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:18:59 crc kubenswrapper[4676]: E0124 00:18:59.930747 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh" podUID="d85fa79d-818f-4079-aac4-f3fa51a90e9a" Jan 24 00:19:00 crc kubenswrapper[4676]: E0124 00:19:00.634208 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831" Jan 24 00:19:00 crc kubenswrapper[4676]: E0124 00:19:00.634390 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l8pmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-4xp45_openstack-operators(df9ab5f0-f577-4303-8045-f960c67a6936): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:19:00 crc kubenswrapper[4676]: E0124 00:19:00.635779 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45" podUID="df9ab5f0-f577-4303-8045-f960c67a6936" Jan 24 00:19:01 crc kubenswrapper[4676]: E0124 00:19:01.295891 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 24 00:19:01 crc kubenswrapper[4676]: E0124 00:19:01.296317 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8ffff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-h44qw_openstack-operators(fce4c8d0-b903-4873-8c89-2f4b9dd9c05d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:19:01 crc kubenswrapper[4676]: E0124 00:19:01.297592 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw" podUID="fce4c8d0-b903-4873-8c89-2f4b9dd9c05d" Jan 24 00:19:01 crc kubenswrapper[4676]: I0124 00:19:01.785225 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs"] Jan 24 00:19:01 crc kubenswrapper[4676]: I0124 00:19:01.920134 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26"] Jan 24 00:19:01 crc kubenswrapper[4676]: I0124 00:19:01.938663 4676 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms"] Jan 24 00:19:01 crc kubenswrapper[4676]: W0124 00:19:01.986563 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196f45b9_e656_4760_b058_e0b5c08a50d9.slice/crio-c719a5059cec3d49dd2a99d136341fb5121ae1ac6a487e244576edb0fad3b540 WatchSource:0}: Error finding container c719a5059cec3d49dd2a99d136341fb5121ae1ac6a487e244576edb0fad3b540: Status 404 returned error can't find the container with id c719a5059cec3d49dd2a99d136341fb5121ae1ac6a487e244576edb0fad3b540 Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.101917 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8rvgb"] Jan 24 00:19:02 crc kubenswrapper[4676]: W0124 00:19:02.119479 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddccfed4b_e148_4e09_8c96_f5a28fefd6b7.slice/crio-ae7b958b56efd4b4607c5292b0d29fa3ac42c0d3302dd61deaf7d117c5fd353a WatchSource:0}: Error finding container ae7b958b56efd4b4607c5292b0d29fa3ac42c0d3302dd61deaf7d117c5fd353a: Status 404 returned error can't find the container with id ae7b958b56efd4b4607c5292b0d29fa3ac42c0d3302dd61deaf7d117c5fd353a Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.132446 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbhrz"] Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.827448 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-btxnv" event={"ID":"29e4b64d-19bd-419b-9e29-7a41e6f12ae0","Type":"ContainerStarted","Data":"c94ff148de8e830b54fedc7e8ea23b763f7e15f45a9e67d7d1b8f51192ef4cc5"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.828517 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-btxnv" Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.829630 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-p2nr8" event={"ID":"5e9cf1cb-c413-45ad-8a51-bf35407fcdfe","Type":"ContainerStarted","Data":"79aed58d1500a29487f2ad99122f300fe0d4be30fe9544de12a9b1664479653a"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.829959 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-p2nr8" Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.831241 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rvgb" event={"ID":"dccfed4b-e148-4e09-8c96-f5a28fefd6b7","Type":"ContainerStarted","Data":"ae7b958b56efd4b4607c5292b0d29fa3ac42c0d3302dd61deaf7d117c5fd353a"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.832423 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbhrz" event={"ID":"6fb6d395-75c7-4f58-85cf-a97ad12bb288","Type":"ContainerStarted","Data":"dc8c29c9b6a4f98617ba640622dff3ce5e2bbf9034752fb810b692b938fe4d40"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.833865 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-js6db" event={"ID":"a9f1e2a4-c9fa-4136-aa76-059dc2ed9c85","Type":"ContainerStarted","Data":"edd1392f87f77e9c36877306b88f09986ff4d5a5728c278d969a1d3b2dae468a"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.834256 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-js6db" Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.835460 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" event={"ID":"9d79c791-c851-4c4a-aa2d-d175b668b0f5","Type":"ContainerStarted","Data":"e03878e2d87db1a4615d65b136e9d1a03b12224c279491cec4309bd6c2754e99"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.835483 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" event={"ID":"9d79c791-c851-4c4a-aa2d-d175b668b0f5","Type":"ContainerStarted","Data":"36b786131833cb15f0894c7ec6c55fca661bb1b7d3c23584010803300759f98a"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.835805 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.836826 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rbzsj" event={"ID":"6b8541f9-a37a-41d6-8006-3d0335c3abb5","Type":"ContainerStarted","Data":"c27d94b2d74a179ee31054a5427091d0cc5a482a5eba9d918140283913ca1c9c"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.837140 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rbzsj" Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.838518 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-h6hzt" event={"ID":"ccb1ff12-bef7-4f23-b084-fae32f8202ac","Type":"ContainerStarted","Data":"225be960d85ba3f02a2da8cb559d60253fe2af3074ac37328062999f50d2800c"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.838831 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-h6hzt" Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.840060 4676 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v25h4" event={"ID":"555ebb8f-1bc3-4b8d-9f37-cad92b48477c","Type":"ContainerStarted","Data":"5d6d33e8954d6997cc050ec656a057d92cf8476f658755a78fb8d06a91e7ad20"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.840289 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v25h4" Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.841576 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fgplq" event={"ID":"02123851-7d2f-477b-9c60-5a9922a0bc97","Type":"ContainerStarted","Data":"cc5aac30db056a91920b0f85abbe608ee3a61b6063fd40d80b2dc0f5cffa8ba4"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.841698 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fgplq" Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.842320 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" event={"ID":"196f45b9-e656-4760-b058-e0b5c08a50d9","Type":"ContainerStarted","Data":"c719a5059cec3d49dd2a99d136341fb5121ae1ac6a487e244576edb0fad3b540"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.843683 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n" event={"ID":"4ce661f6-26e2-4da2-a759-e493a60587b2","Type":"ContainerStarted","Data":"4f123b889d5c8f17006a5118c2e0a95be43d36b1800fd499c154a705f93959c7"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.844096 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n" Jan 24 00:19:02 crc 
kubenswrapper[4676]: I0124 00:19:02.845121 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" event={"ID":"921e121c-5261-4fe7-8171-6b634babedf4","Type":"ContainerStarted","Data":"4cd646ceb9964735b20736b129c5849347664e8cad6dc534ad2a31cae9a4e1c6"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.846209 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c8c6m" event={"ID":"0cd05b9f-6699-46e3-ae36-9f21352e6c8e","Type":"ContainerStarted","Data":"247343947e75491a04b4468dc003f26c722cb2cf09a2a6f4c36d0077d4814257"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.846557 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c8c6m" Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.847585 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-k8lw7" event={"ID":"ced74bcb-8345-40c5-b2d4-3d369f30b835","Type":"ContainerStarted","Data":"7857adbd42df4c3bb3e0e71e201ae55278800fb5d39ae8bac371145af179f2b3"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.847920 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-k8lw7" Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.850064 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4c5zl" event={"ID":"060e1c8d-dfa6-428f-bffe-d89ac3dab8c3","Type":"ContainerStarted","Data":"8c070fc6f5bbc004f03ef8f9daf5e548d66db0e1aaf71692113d4beb7502ea1e"} Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.850368 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4c5zl" Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.923305 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs" podStartSLOduration=45.923289679 podStartE2EDuration="45.923289679s" podCreationTimestamp="2026-01-24 00:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:19:02.918956236 +0000 UTC m=+926.948927237" watchObservedRunningTime="2026-01-24 00:19:02.923289679 +0000 UTC m=+926.953260680" Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.924483 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-btxnv" podStartSLOduration=3.822036786 podStartE2EDuration="46.924475656s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:18:18.291600356 +0000 UTC m=+882.321571367" lastFinishedPulling="2026-01-24 00:19:01.394039236 +0000 UTC m=+925.424010237" observedRunningTime="2026-01-24 00:19:02.8636002 +0000 UTC m=+926.893571201" watchObservedRunningTime="2026-01-24 00:19:02.924475656 +0000 UTC m=+926.954446657" Jan 24 00:19:02 crc kubenswrapper[4676]: I0124 00:19:02.952207 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-p2nr8" podStartSLOduration=4.313833147 podStartE2EDuration="46.952187331s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:18:18.650724236 +0000 UTC m=+882.680695237" lastFinishedPulling="2026-01-24 00:19:01.28907842 +0000 UTC m=+925.319049421" observedRunningTime="2026-01-24 00:19:02.947918559 +0000 UTC m=+926.977889560" watchObservedRunningTime="2026-01-24 00:19:02.952187331 +0000 UTC m=+926.982158332" Jan 
24 00:19:03 crc kubenswrapper[4676]: I0124 00:19:03.117870 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-h6hzt" podStartSLOduration=4.578457924 podStartE2EDuration="46.117852998s" podCreationTimestamp="2026-01-24 00:18:17 +0000 UTC" firstStartedPulling="2026-01-24 00:18:19.081050953 +0000 UTC m=+883.111021954" lastFinishedPulling="2026-01-24 00:19:00.620446017 +0000 UTC m=+924.650417028" observedRunningTime="2026-01-24 00:19:03.056827187 +0000 UTC m=+927.086798188" watchObservedRunningTime="2026-01-24 00:19:03.117852998 +0000 UTC m=+927.147823999" Jan 24 00:19:03 crc kubenswrapper[4676]: I0124 00:19:03.176690 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-js6db" podStartSLOduration=4.407043971 podStartE2EDuration="47.176671961s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:18:18.626756128 +0000 UTC m=+882.656727129" lastFinishedPulling="2026-01-24 00:19:01.396384118 +0000 UTC m=+925.426355119" observedRunningTime="2026-01-24 00:19:03.119315993 +0000 UTC m=+927.149286994" watchObservedRunningTime="2026-01-24 00:19:03.176671961 +0000 UTC m=+927.206642962" Jan 24 00:19:03 crc kubenswrapper[4676]: I0124 00:19:03.209624 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rbzsj" podStartSLOduration=4.811066826 podStartE2EDuration="47.209606717s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:18:18.223138455 +0000 UTC m=+882.253109456" lastFinishedPulling="2026-01-24 00:19:00.621678336 +0000 UTC m=+924.651649347" observedRunningTime="2026-01-24 00:19:03.180880671 +0000 UTC m=+927.210851672" watchObservedRunningTime="2026-01-24 00:19:03.209606717 +0000 UTC m=+927.239577718" Jan 24 00:19:03 crc 
kubenswrapper[4676]: I0124 00:19:03.212228 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-k8lw7" podStartSLOduration=4.286507776 podStartE2EDuration="47.212222587s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:18:18.653968997 +0000 UTC m=+882.683939998" lastFinishedPulling="2026-01-24 00:19:01.579683808 +0000 UTC m=+925.609654809" observedRunningTime="2026-01-24 00:19:03.208494383 +0000 UTC m=+927.238465384" watchObservedRunningTime="2026-01-24 00:19:03.212222587 +0000 UTC m=+927.242193588" Jan 24 00:19:03 crc kubenswrapper[4676]: I0124 00:19:03.243644 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v25h4" podStartSLOduration=4.291225612 podStartE2EDuration="47.243630456s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:18:18.616685878 +0000 UTC m=+882.646656879" lastFinishedPulling="2026-01-24 00:19:01.569090722 +0000 UTC m=+925.599061723" observedRunningTime="2026-01-24 00:19:03.240353684 +0000 UTC m=+927.270324685" watchObservedRunningTime="2026-01-24 00:19:03.243630456 +0000 UTC m=+927.273601457" Jan 24 00:19:03 crc kubenswrapper[4676]: I0124 00:19:03.258598 4676 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 00:19:03 crc kubenswrapper[4676]: I0124 00:19:03.304437 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n" podStartSLOduration=13.269747103 podStartE2EDuration="47.304423619s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:18:18.546906077 +0000 UTC m=+882.576877078" lastFinishedPulling="2026-01-24 00:18:52.581582593 +0000 UTC m=+916.611553594" observedRunningTime="2026-01-24 
00:19:03.301219151 +0000 UTC m=+927.331190152" watchObservedRunningTime="2026-01-24 00:19:03.304423619 +0000 UTC m=+927.334394620" Jan 24 00:19:03 crc kubenswrapper[4676]: I0124 00:19:03.343146 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4c5zl" podStartSLOduration=12.530652975 podStartE2EDuration="46.343129232s" podCreationTimestamp="2026-01-24 00:18:17 +0000 UTC" firstStartedPulling="2026-01-24 00:18:18.77049988 +0000 UTC m=+882.800470871" lastFinishedPulling="2026-01-24 00:18:52.582976127 +0000 UTC m=+916.612947128" observedRunningTime="2026-01-24 00:19:03.335503628 +0000 UTC m=+927.365474629" watchObservedRunningTime="2026-01-24 00:19:03.343129232 +0000 UTC m=+927.373100233" Jan 24 00:19:03 crc kubenswrapper[4676]: I0124 00:19:03.377230 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c8c6m" podStartSLOduration=12.728408723 podStartE2EDuration="47.377215533s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:18:17.933724453 +0000 UTC m=+881.963695444" lastFinishedPulling="2026-01-24 00:18:52.582531253 +0000 UTC m=+916.612502254" observedRunningTime="2026-01-24 00:19:03.373954773 +0000 UTC m=+927.403925774" watchObservedRunningTime="2026-01-24 00:19:03.377215533 +0000 UTC m=+927.407186534" Jan 24 00:19:03 crc kubenswrapper[4676]: I0124 00:19:03.414873 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fgplq" podStartSLOduration=17.280900119000002 podStartE2EDuration="47.414857184s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:18:18.092035003 +0000 UTC m=+882.122006004" lastFinishedPulling="2026-01-24 00:18:48.225992068 +0000 UTC m=+912.255963069" observedRunningTime="2026-01-24 00:19:03.411760099 
+0000 UTC m=+927.441731100" watchObservedRunningTime="2026-01-24 00:19:03.414857184 +0000 UTC m=+927.444828185" Jan 24 00:19:04 crc kubenswrapper[4676]: I0124 00:19:04.870175 4676 generic.go:334] "Generic (PLEG): container finished" podID="dccfed4b-e148-4e09-8c96-f5a28fefd6b7" containerID="8860288802f8353ee23e76485e0052d8eb477114812b0d068faa208bb24aa943" exitCode=0 Jan 24 00:19:04 crc kubenswrapper[4676]: I0124 00:19:04.870339 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rvgb" event={"ID":"dccfed4b-e148-4e09-8c96-f5a28fefd6b7","Type":"ContainerDied","Data":"8860288802f8353ee23e76485e0052d8eb477114812b0d068faa208bb24aa943"} Jan 24 00:19:05 crc kubenswrapper[4676]: E0124 00:19:05.267310 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82" podUID="6245b73e-9fba-4ad7-bbbc-31db48c03825" Jan 24 00:19:05 crc kubenswrapper[4676]: I0124 00:19:05.884081 4676 generic.go:334] "Generic (PLEG): container finished" podID="6fb6d395-75c7-4f58-85cf-a97ad12bb288" containerID="fad1a55313f087cfbdd86fc92712061e080059764bba6dea18e37b485e2e6d49" exitCode=0 Jan 24 00:19:05 crc kubenswrapper[4676]: I0124 00:19:05.884452 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbhrz" event={"ID":"6fb6d395-75c7-4f58-85cf-a97ad12bb288","Type":"ContainerDied","Data":"fad1a55313f087cfbdd86fc92712061e080059764bba6dea18e37b485e2e6d49"} Jan 24 00:19:05 crc kubenswrapper[4676]: I0124 00:19:05.886971 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2gzqg" 
event={"ID":"9a3f9a14-1138-425d-8a56-454b282d7d9f","Type":"ContainerStarted","Data":"78680acd5bedfe77b85e6fb17f9b64dce6f8a3e4764ff9f094cfecf560d8e821"} Jan 24 00:19:05 crc kubenswrapper[4676]: I0124 00:19:05.887615 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2gzqg" Jan 24 00:19:05 crc kubenswrapper[4676]: I0124 00:19:05.902851 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g4d8z" event={"ID":"dfc79179-d245-4360-be6e-8b43441e23ed","Type":"ContainerStarted","Data":"3e3dcb284379f3f1783412650f16694ff890f9172e38e5b94ec318f1d6c89e9c"} Jan 24 00:19:05 crc kubenswrapper[4676]: I0124 00:19:05.903312 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g4d8z" Jan 24 00:19:05 crc kubenswrapper[4676]: I0124 00:19:05.917289 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2gzqg" podStartSLOduration=3.2913077250000002 podStartE2EDuration="49.917268499s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:18:18.647193508 +0000 UTC m=+882.677164509" lastFinishedPulling="2026-01-24 00:19:05.273154282 +0000 UTC m=+929.303125283" observedRunningTime="2026-01-24 00:19:05.912815142 +0000 UTC m=+929.942786143" watchObservedRunningTime="2026-01-24 00:19:05.917268499 +0000 UTC m=+929.947239490" Jan 24 00:19:05 crc kubenswrapper[4676]: I0124 00:19:05.941476 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g4d8z" podStartSLOduration=3.350687125 podStartE2EDuration="49.941461115s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:18:18.674632504 +0000 UTC m=+882.704603505" 
lastFinishedPulling="2026-01-24 00:19:05.265406484 +0000 UTC m=+929.295377495" observedRunningTime="2026-01-24 00:19:05.93706612 +0000 UTC m=+929.967037111" watchObservedRunningTime="2026-01-24 00:19:05.941461115 +0000 UTC m=+929.971432116" Jan 24 00:19:06 crc kubenswrapper[4676]: I0124 00:19:06.592427 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-c8c6m" Jan 24 00:19:06 crc kubenswrapper[4676]: I0124 00:19:06.670167 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-js6db" Jan 24 00:19:06 crc kubenswrapper[4676]: I0124 00:19:06.704874 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-p2nr8" Jan 24 00:19:06 crc kubenswrapper[4676]: I0124 00:19:06.777521 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-rbzsj" Jan 24 00:19:06 crc kubenswrapper[4676]: I0124 00:19:06.915490 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-49nrq" event={"ID":"53358678-d763-4b02-a157-86a57ebd0305","Type":"ContainerStarted","Data":"949f9736c183d87736871cec1ab379c52ad00689b92be4b99d7a8a69463cb60a"} Jan 24 00:19:06 crc kubenswrapper[4676]: I0124 00:19:06.915876 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-49nrq" Jan 24 00:19:06 crc kubenswrapper[4676]: I0124 00:19:06.932952 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-49nrq" podStartSLOduration=3.8781462060000003 podStartE2EDuration="50.93293627s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" 
firstStartedPulling="2026-01-24 00:18:18.647362124 +0000 UTC m=+882.677333125" lastFinishedPulling="2026-01-24 00:19:05.702152188 +0000 UTC m=+929.732123189" observedRunningTime="2026-01-24 00:19:06.931264349 +0000 UTC m=+930.961235350" watchObservedRunningTime="2026-01-24 00:19:06.93293627 +0000 UTC m=+930.962907271" Jan 24 00:19:07 crc kubenswrapper[4676]: I0124 00:19:07.055061 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-v25h4" Jan 24 00:19:07 crc kubenswrapper[4676]: I0124 00:19:07.085927 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n" Jan 24 00:19:07 crc kubenswrapper[4676]: I0124 00:19:07.444660 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-k8lw7" Jan 24 00:19:07 crc kubenswrapper[4676]: I0124 00:19:07.704043 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4c5zl" Jan 24 00:19:07 crc kubenswrapper[4676]: I0124 00:19:07.767027 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-h6hzt" Jan 24 00:19:08 crc kubenswrapper[4676]: I0124 00:19:08.927402 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5df95d5965-h8wx9" event={"ID":"b0c8972b-31d7-40c1-bc65-1478718d41a5","Type":"ContainerStarted","Data":"abfef511ba5d3692f30535b0933f2886ff27a0633dd8cc09c8c3f685ca22fbc8"} Jan 24 00:19:08 crc kubenswrapper[4676]: I0124 00:19:08.927976 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5df95d5965-h8wx9" Jan 24 00:19:08 crc kubenswrapper[4676]: 
I0124 00:19:08.931355 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" event={"ID":"921e121c-5261-4fe7-8171-6b634babedf4","Type":"ContainerStarted","Data":"a93d80bd6a6ba10512cd156d0877df7f548f4aabc39afaf8b53c2a37fa95c23d"}
Jan 24 00:19:08 crc kubenswrapper[4676]: I0124 00:19:08.931645 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26"
Jan 24 00:19:08 crc kubenswrapper[4676]: I0124 00:19:08.933991 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rvgb" event={"ID":"dccfed4b-e148-4e09-8c96-f5a28fefd6b7","Type":"ContainerStarted","Data":"d78da559e910d3a887dafee8ce3b16fd2ae5d6fe6bbaf01fbe69c663c5b3dcff"}
Jan 24 00:19:08 crc kubenswrapper[4676]: I0124 00:19:08.936034 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbhrz" event={"ID":"6fb6d395-75c7-4f58-85cf-a97ad12bb288","Type":"ContainerStarted","Data":"09d568c70998bca76872972662abe01d907ca7c4e29651c8d5b4e3e18f5a2f04"}
Jan 24 00:19:08 crc kubenswrapper[4676]: I0124 00:19:08.937512 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" event={"ID":"196f45b9-e656-4760-b058-e0b5c08a50d9","Type":"ContainerStarted","Data":"37cb2962b3de94522bfdf9940464761e018897a697a1edfbd7e0ae5fdfb20c22"}
Jan 24 00:19:08 crc kubenswrapper[4676]: I0124 00:19:08.937891 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms"
Jan 24 00:19:08 crc kubenswrapper[4676]: I0124 00:19:08.947087 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5df95d5965-h8wx9" podStartSLOduration=3.167566952 podStartE2EDuration="52.947070033s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:18:18.7792613 +0000 UTC m=+882.809232301" lastFinishedPulling="2026-01-24 00:19:08.558764341 +0000 UTC m=+932.588735382" observedRunningTime="2026-01-24 00:19:08.943236854 +0000 UTC m=+932.973207865" watchObservedRunningTime="2026-01-24 00:19:08.947070033 +0000 UTC m=+932.977041034"
Jan 24 00:19:08 crc kubenswrapper[4676]: I0124 00:19:08.990482 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms" podStartSLOduration=46.486801013 podStartE2EDuration="52.99046723s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:19:01.99492693 +0000 UTC m=+926.024897931" lastFinishedPulling="2026-01-24 00:19:08.498593137 +0000 UTC m=+932.528564148" observedRunningTime="2026-01-24 00:19:08.983130314 +0000 UTC m=+933.013101315" watchObservedRunningTime="2026-01-24 00:19:08.99046723 +0000 UTC m=+933.020438221"
Jan 24 00:19:09 crc kubenswrapper[4676]: I0124 00:19:09.008811 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26" podStartSLOduration=46.480445398 podStartE2EDuration="53.008792066s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:19:01.964409519 +0000 UTC m=+925.994380520" lastFinishedPulling="2026-01-24 00:19:08.492756177 +0000 UTC m=+932.522727188" observedRunningTime="2026-01-24 00:19:08.999278632 +0000 UTC m=+933.029249623" watchObservedRunningTime="2026-01-24 00:19:09.008792066 +0000 UTC m=+933.038763067"
Jan 24 00:19:09 crc kubenswrapper[4676]: I0124 00:19:09.363833 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 00:19:09 crc kubenswrapper[4676]: I0124 00:19:09.363887 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 00:19:09 crc kubenswrapper[4676]: I0124 00:19:09.655741 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5d5f8c4f48-md6zs"
Jan 24 00:19:10 crc kubenswrapper[4676]: E0124 00:19:10.256690 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz" podUID="dd6346d8-9cf1-4364-b480-f4c2d872472f"
Jan 24 00:19:11 crc kubenswrapper[4676]: I0124 00:19:11.959011 4676 generic.go:334] "Generic (PLEG): container finished" podID="dccfed4b-e148-4e09-8c96-f5a28fefd6b7" containerID="d78da559e910d3a887dafee8ce3b16fd2ae5d6fe6bbaf01fbe69c663c5b3dcff" exitCode=0
Jan 24 00:19:11 crc kubenswrapper[4676]: I0124 00:19:11.959109 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rvgb" event={"ID":"dccfed4b-e148-4e09-8c96-f5a28fefd6b7","Type":"ContainerDied","Data":"d78da559e910d3a887dafee8ce3b16fd2ae5d6fe6bbaf01fbe69c663c5b3dcff"}
Jan 24 00:19:11 crc kubenswrapper[4676]: I0124 00:19:11.963431 4676 generic.go:334] "Generic (PLEG): container finished" podID="6fb6d395-75c7-4f58-85cf-a97ad12bb288" containerID="09d568c70998bca76872972662abe01d907ca7c4e29651c8d5b4e3e18f5a2f04" exitCode=0
Jan 24 00:19:11 crc kubenswrapper[4676]: I0124 00:19:11.963503 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbhrz" event={"ID":"6fb6d395-75c7-4f58-85cf-a97ad12bb288","Type":"ContainerDied","Data":"09d568c70998bca76872972662abe01d907ca7c4e29651c8d5b4e3e18f5a2f04"}
Jan 24 00:19:12 crc kubenswrapper[4676]: E0124 00:19:12.257054 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45" podUID="df9ab5f0-f577-4303-8045-f960c67a6936"
Jan 24 00:19:12 crc kubenswrapper[4676]: E0124 00:19:12.257474 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw" podUID="fce4c8d0-b903-4873-8c89-2f4b9dd9c05d"
Jan 24 00:19:13 crc kubenswrapper[4676]: I0124 00:19:13.983245 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rvgb" event={"ID":"dccfed4b-e148-4e09-8c96-f5a28fefd6b7","Type":"ContainerStarted","Data":"055fb9a70848ca751070c86e921c80b30bcb2f19ac12925601a33d8c6cc09ff1"}
Jan 24 00:19:13 crc kubenswrapper[4676]: I0124 00:19:13.985436 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbhrz" event={"ID":"6fb6d395-75c7-4f58-85cf-a97ad12bb288","Type":"ContainerStarted","Data":"a0f2cba8fa77e70a4bfcebc1b00b9e73f519a315a91e3e4d1f805b1d5c9267ea"}
Jan 24 00:19:14 crc kubenswrapper[4676]: I0124 00:19:14.034500 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8rvgb" podStartSLOduration=15.113324433 podStartE2EDuration="22.034487149s" podCreationTimestamp="2026-01-24 00:18:52 +0000 UTC" firstStartedPulling="2026-01-24 00:19:05.904332531 +0000 UTC m=+929.934303532" lastFinishedPulling="2026-01-24 00:19:12.825495247 +0000 UTC m=+936.855466248" observedRunningTime="2026-01-24 00:19:14.033719025 +0000 UTC m=+938.063690026" watchObservedRunningTime="2026-01-24 00:19:14.034487149 +0000 UTC m=+938.064458150"
Jan 24 00:19:14 crc kubenswrapper[4676]: I0124 00:19:14.076975 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lbhrz" podStartSLOduration=10.150678542 podStartE2EDuration="17.076957487s" podCreationTimestamp="2026-01-24 00:18:57 +0000 UTC" firstStartedPulling="2026-01-24 00:19:05.885496299 +0000 UTC m=+929.915467300" lastFinishedPulling="2026-01-24 00:19:12.811775244 +0000 UTC m=+936.841746245" observedRunningTime="2026-01-24 00:19:14.071934943 +0000 UTC m=+938.101905954" watchObservedRunningTime="2026-01-24 00:19:14.076957487 +0000 UTC m=+938.106928488"
Jan 24 00:19:15 crc kubenswrapper[4676]: E0124 00:19:15.257623 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh" podUID="d85fa79d-818f-4079-aac4-f3fa51a90e9a"
Jan 24 00:19:16 crc kubenswrapper[4676]: I0124 00:19:16.569553 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fgplq"
Jan 24 00:19:16 crc kubenswrapper[4676]: I0124 00:19:16.664648 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-btxnv"
Jan 24 00:19:17 crc kubenswrapper[4676]: I0124 00:19:17.006323 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82" event={"ID":"6245b73e-9fba-4ad7-bbbc-31db48c03825","Type":"ContainerStarted","Data":"25b8ffc3686fc2ad6e94b92751d7a382f2ddf94fa692fbc5f2bdc927b0241a35"}
Jan 24 00:19:17 crc kubenswrapper[4676]: I0124 00:19:17.007490 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82"
Jan 24 00:19:17 crc kubenswrapper[4676]: I0124 00:19:17.026105 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82" podStartSLOduration=2.416502544 podStartE2EDuration="1m0.026081944s" podCreationTimestamp="2026-01-24 00:18:17 +0000 UTC" firstStartedPulling="2026-01-24 00:18:19.102109652 +0000 UTC m=+883.132080653" lastFinishedPulling="2026-01-24 00:19:16.711689052 +0000 UTC m=+940.741660053" observedRunningTime="2026-01-24 00:19:17.022614617 +0000 UTC m=+941.052585628" watchObservedRunningTime="2026-01-24 00:19:17.026081944 +0000 UTC m=+941.056052955"
Jan 24 00:19:17 crc kubenswrapper[4676]: I0124 00:19:17.030623 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2gzqg"
Jan 24 00:19:17 crc kubenswrapper[4676]: I0124 00:19:17.042959 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g4d8z"
Jan 24 00:19:17 crc kubenswrapper[4676]: I0124 00:19:17.296584 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-49nrq"
Jan 24 00:19:17 crc kubenswrapper[4676]: I0124 00:19:17.522401 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lbhrz"
Jan 24 00:19:17 crc kubenswrapper[4676]: I0124 00:19:17.522445 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lbhrz"
Jan 24 00:19:17 crc kubenswrapper[4676]: I0124 00:19:17.587509 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lbhrz"
Jan 24 00:19:17 crc kubenswrapper[4676]: I0124 00:19:17.633433 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5df95d5965-h8wx9"
Jan 24 00:19:18 crc kubenswrapper[4676]: I0124 00:19:18.089564 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lbhrz"
Jan 24 00:19:18 crc kubenswrapper[4676]: I0124 00:19:18.136426 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbhrz"]
Jan 24 00:19:18 crc kubenswrapper[4676]: I0124 00:19:18.563724 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jxx26"
Jan 24 00:19:19 crc kubenswrapper[4676]: I0124 00:19:19.358145 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms"
Jan 24 00:19:20 crc kubenswrapper[4676]: I0124 00:19:20.028252 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lbhrz" podUID="6fb6d395-75c7-4f58-85cf-a97ad12bb288" containerName="registry-server" containerID="cri-o://a0f2cba8fa77e70a4bfcebc1b00b9e73f519a315a91e3e4d1f805b1d5c9267ea" gracePeriod=2
Jan 24 00:19:20 crc kubenswrapper[4676]: I0124 00:19:20.456251 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbhrz"
Jan 24 00:19:20 crc kubenswrapper[4676]: I0124 00:19:20.612612 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fb6d395-75c7-4f58-85cf-a97ad12bb288-utilities\") pod \"6fb6d395-75c7-4f58-85cf-a97ad12bb288\" (UID: \"6fb6d395-75c7-4f58-85cf-a97ad12bb288\") "
Jan 24 00:19:20 crc kubenswrapper[4676]: I0124 00:19:20.612693 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksl7d\" (UniqueName: \"kubernetes.io/projected/6fb6d395-75c7-4f58-85cf-a97ad12bb288-kube-api-access-ksl7d\") pod \"6fb6d395-75c7-4f58-85cf-a97ad12bb288\" (UID: \"6fb6d395-75c7-4f58-85cf-a97ad12bb288\") "
Jan 24 00:19:20 crc kubenswrapper[4676]: I0124 00:19:20.612763 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb6d395-75c7-4f58-85cf-a97ad12bb288-catalog-content\") pod \"6fb6d395-75c7-4f58-85cf-a97ad12bb288\" (UID: \"6fb6d395-75c7-4f58-85cf-a97ad12bb288\") "
Jan 24 00:19:20 crc kubenswrapper[4676]: I0124 00:19:20.613557 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb6d395-75c7-4f58-85cf-a97ad12bb288-utilities" (OuterVolumeSpecName: "utilities") pod "6fb6d395-75c7-4f58-85cf-a97ad12bb288" (UID: "6fb6d395-75c7-4f58-85cf-a97ad12bb288"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 00:19:20 crc kubenswrapper[4676]: I0124 00:19:20.621675 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb6d395-75c7-4f58-85cf-a97ad12bb288-kube-api-access-ksl7d" (OuterVolumeSpecName: "kube-api-access-ksl7d") pod "6fb6d395-75c7-4f58-85cf-a97ad12bb288" (UID: "6fb6d395-75c7-4f58-85cf-a97ad12bb288"). InnerVolumeSpecName "kube-api-access-ksl7d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 00:19:20 crc kubenswrapper[4676]: I0124 00:19:20.638274 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb6d395-75c7-4f58-85cf-a97ad12bb288-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fb6d395-75c7-4f58-85cf-a97ad12bb288" (UID: "6fb6d395-75c7-4f58-85cf-a97ad12bb288"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 00:19:20 crc kubenswrapper[4676]: I0124 00:19:20.714940 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fb6d395-75c7-4f58-85cf-a97ad12bb288-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 00:19:20 crc kubenswrapper[4676]: I0124 00:19:20.714973 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksl7d\" (UniqueName: \"kubernetes.io/projected/6fb6d395-75c7-4f58-85cf-a97ad12bb288-kube-api-access-ksl7d\") on node \"crc\" DevicePath \"\""
Jan 24 00:19:20 crc kubenswrapper[4676]: I0124 00:19:20.714984 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb6d395-75c7-4f58-85cf-a97ad12bb288-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 00:19:21 crc kubenswrapper[4676]: I0124 00:19:21.036827 4676 generic.go:334] "Generic (PLEG): container finished" podID="6fb6d395-75c7-4f58-85cf-a97ad12bb288" containerID="a0f2cba8fa77e70a4bfcebc1b00b9e73f519a315a91e3e4d1f805b1d5c9267ea" exitCode=0
Jan 24 00:19:21 crc kubenswrapper[4676]: I0124 00:19:21.036871 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbhrz" event={"ID":"6fb6d395-75c7-4f58-85cf-a97ad12bb288","Type":"ContainerDied","Data":"a0f2cba8fa77e70a4bfcebc1b00b9e73f519a315a91e3e4d1f805b1d5c9267ea"}
Jan 24 00:19:21 crc kubenswrapper[4676]: I0124 00:19:21.036909 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbhrz" event={"ID":"6fb6d395-75c7-4f58-85cf-a97ad12bb288","Type":"ContainerDied","Data":"dc8c29c9b6a4f98617ba640622dff3ce5e2bbf9034752fb810b692b938fe4d40"}
Jan 24 00:19:21 crc kubenswrapper[4676]: I0124 00:19:21.036915 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbhrz"
Jan 24 00:19:21 crc kubenswrapper[4676]: I0124 00:19:21.036929 4676 scope.go:117] "RemoveContainer" containerID="a0f2cba8fa77e70a4bfcebc1b00b9e73f519a315a91e3e4d1f805b1d5c9267ea"
Jan 24 00:19:21 crc kubenswrapper[4676]: I0124 00:19:21.069759 4676 scope.go:117] "RemoveContainer" containerID="09d568c70998bca76872972662abe01d907ca7c4e29651c8d5b4e3e18f5a2f04"
Jan 24 00:19:21 crc kubenswrapper[4676]: I0124 00:19:21.076087 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbhrz"]
Jan 24 00:19:21 crc kubenswrapper[4676]: I0124 00:19:21.079387 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbhrz"]
Jan 24 00:19:21 crc kubenswrapper[4676]: I0124 00:19:21.086293 4676 scope.go:117] "RemoveContainer" containerID="fad1a55313f087cfbdd86fc92712061e080059764bba6dea18e37b485e2e6d49"
Jan 24 00:19:21 crc kubenswrapper[4676]: I0124 00:19:21.105515 4676 scope.go:117] "RemoveContainer" containerID="a0f2cba8fa77e70a4bfcebc1b00b9e73f519a315a91e3e4d1f805b1d5c9267ea"
Jan 24 00:19:21 crc kubenswrapper[4676]: E0124 00:19:21.105860 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0f2cba8fa77e70a4bfcebc1b00b9e73f519a315a91e3e4d1f805b1d5c9267ea\": container with ID starting with a0f2cba8fa77e70a4bfcebc1b00b9e73f519a315a91e3e4d1f805b1d5c9267ea not found: ID does not exist" containerID="a0f2cba8fa77e70a4bfcebc1b00b9e73f519a315a91e3e4d1f805b1d5c9267ea"
Jan 24 00:19:21 crc kubenswrapper[4676]: I0124 00:19:21.105892 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f2cba8fa77e70a4bfcebc1b00b9e73f519a315a91e3e4d1f805b1d5c9267ea"} err="failed to get container status \"a0f2cba8fa77e70a4bfcebc1b00b9e73f519a315a91e3e4d1f805b1d5c9267ea\": rpc error: code = NotFound desc = could not find container \"a0f2cba8fa77e70a4bfcebc1b00b9e73f519a315a91e3e4d1f805b1d5c9267ea\": container with ID starting with a0f2cba8fa77e70a4bfcebc1b00b9e73f519a315a91e3e4d1f805b1d5c9267ea not found: ID does not exist"
Jan 24 00:19:21 crc kubenswrapper[4676]: I0124 00:19:21.105911 4676 scope.go:117] "RemoveContainer" containerID="09d568c70998bca76872972662abe01d907ca7c4e29651c8d5b4e3e18f5a2f04"
Jan 24 00:19:21 crc kubenswrapper[4676]: E0124 00:19:21.106134 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09d568c70998bca76872972662abe01d907ca7c4e29651c8d5b4e3e18f5a2f04\": container with ID starting with 09d568c70998bca76872972662abe01d907ca7c4e29651c8d5b4e3e18f5a2f04 not found: ID does not exist" containerID="09d568c70998bca76872972662abe01d907ca7c4e29651c8d5b4e3e18f5a2f04"
Jan 24 00:19:21 crc kubenswrapper[4676]: I0124 00:19:21.106153 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09d568c70998bca76872972662abe01d907ca7c4e29651c8d5b4e3e18f5a2f04"} err="failed to get container status \"09d568c70998bca76872972662abe01d907ca7c4e29651c8d5b4e3e18f5a2f04\": rpc error: code = NotFound desc = could not find container \"09d568c70998bca76872972662abe01d907ca7c4e29651c8d5b4e3e18f5a2f04\": container with ID starting with 09d568c70998bca76872972662abe01d907ca7c4e29651c8d5b4e3e18f5a2f04 not found: ID does not exist"
Jan 24 00:19:21 crc kubenswrapper[4676]: I0124 00:19:21.106165 4676 scope.go:117] "RemoveContainer" containerID="fad1a55313f087cfbdd86fc92712061e080059764bba6dea18e37b485e2e6d49"
Jan 24 00:19:21 crc kubenswrapper[4676]: E0124 00:19:21.106320 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fad1a55313f087cfbdd86fc92712061e080059764bba6dea18e37b485e2e6d49\": container with ID starting with fad1a55313f087cfbdd86fc92712061e080059764bba6dea18e37b485e2e6d49 not found: ID does not exist" containerID="fad1a55313f087cfbdd86fc92712061e080059764bba6dea18e37b485e2e6d49"
Jan 24 00:19:21 crc kubenswrapper[4676]: I0124 00:19:21.106340 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fad1a55313f087cfbdd86fc92712061e080059764bba6dea18e37b485e2e6d49"} err="failed to get container status \"fad1a55313f087cfbdd86fc92712061e080059764bba6dea18e37b485e2e6d49\": rpc error: code = NotFound desc = could not find container \"fad1a55313f087cfbdd86fc92712061e080059764bba6dea18e37b485e2e6d49\": container with ID starting with fad1a55313f087cfbdd86fc92712061e080059764bba6dea18e37b485e2e6d49 not found: ID does not exist"
Jan 24 00:19:22 crc kubenswrapper[4676]: I0124 00:19:22.263811 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb6d395-75c7-4f58-85cf-a97ad12bb288" path="/var/lib/kubelet/pods/6fb6d395-75c7-4f58-85cf-a97ad12bb288/volumes"
Jan 24 00:19:22 crc kubenswrapper[4676]: I0124 00:19:22.963992 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8rvgb"
Jan 24 00:19:22 crc kubenswrapper[4676]: I0124 00:19:22.964061 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8rvgb"
Jan 24 00:19:23 crc kubenswrapper[4676]: I0124 00:19:23.024099 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8rvgb"
Jan 24 00:19:23 crc kubenswrapper[4676]: I0124 00:19:23.103163 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8rvgb"
Jan 24 00:19:24 crc kubenswrapper[4676]: I0124 00:19:24.221125 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8rvgb"]
Jan 24 00:19:25 crc kubenswrapper[4676]: I0124 00:19:25.070671 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45" event={"ID":"df9ab5f0-f577-4303-8045-f960c67a6936","Type":"ContainerStarted","Data":"ad34eaa169d9abea0b138a3ceff2ef435f1c27d3e02a3f2fb45def6ff0c21ca3"}
Jan 24 00:19:25 crc kubenswrapper[4676]: I0124 00:19:25.070898 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8rvgb" podUID="dccfed4b-e148-4e09-8c96-f5a28fefd6b7" containerName="registry-server" containerID="cri-o://055fb9a70848ca751070c86e921c80b30bcb2f19ac12925601a33d8c6cc09ff1" gracePeriod=2
Jan 24 00:19:25 crc kubenswrapper[4676]: I0124 00:19:25.071438 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45"
Jan 24 00:19:25 crc kubenswrapper[4676]: I0124 00:19:25.105987 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45" podStartSLOduration=3.057855358 podStartE2EDuration="1m9.105969583s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:18:18.803648501 +0000 UTC m=+882.833619502" lastFinishedPulling="2026-01-24 00:19:24.851762706 +0000 UTC m=+948.881733727" observedRunningTime="2026-01-24 00:19:25.104300701 +0000 UTC m=+949.134271712" watchObservedRunningTime="2026-01-24 00:19:25.105969583 +0000 UTC m=+949.135940594"
Jan 24 00:19:25 crc kubenswrapper[4676]: I0124 00:19:25.456780 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8rvgb"
Jan 24 00:19:25 crc kubenswrapper[4676]: I0124 00:19:25.592708 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-catalog-content\") pod \"dccfed4b-e148-4e09-8c96-f5a28fefd6b7\" (UID: \"dccfed4b-e148-4e09-8c96-f5a28fefd6b7\") "
Jan 24 00:19:25 crc kubenswrapper[4676]: I0124 00:19:25.592802 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sf2s\" (UniqueName: \"kubernetes.io/projected/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-kube-api-access-4sf2s\") pod \"dccfed4b-e148-4e09-8c96-f5a28fefd6b7\" (UID: \"dccfed4b-e148-4e09-8c96-f5a28fefd6b7\") "
Jan 24 00:19:25 crc kubenswrapper[4676]: I0124 00:19:25.592911 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-utilities\") pod \"dccfed4b-e148-4e09-8c96-f5a28fefd6b7\" (UID: \"dccfed4b-e148-4e09-8c96-f5a28fefd6b7\") "
Jan 24 00:19:25 crc kubenswrapper[4676]: I0124 00:19:25.593936 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-utilities" (OuterVolumeSpecName: "utilities") pod "dccfed4b-e148-4e09-8c96-f5a28fefd6b7" (UID: "dccfed4b-e148-4e09-8c96-f5a28fefd6b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 00:19:25 crc kubenswrapper[4676]: I0124 00:19:25.599170 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-kube-api-access-4sf2s" (OuterVolumeSpecName: "kube-api-access-4sf2s") pod "dccfed4b-e148-4e09-8c96-f5a28fefd6b7" (UID: "dccfed4b-e148-4e09-8c96-f5a28fefd6b7"). InnerVolumeSpecName "kube-api-access-4sf2s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 00:19:25 crc kubenswrapper[4676]: I0124 00:19:25.641919 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dccfed4b-e148-4e09-8c96-f5a28fefd6b7" (UID: "dccfed4b-e148-4e09-8c96-f5a28fefd6b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 00:19:25 crc kubenswrapper[4676]: I0124 00:19:25.694254 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 00:19:25 crc kubenswrapper[4676]: I0124 00:19:25.694290 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 00:19:25 crc kubenswrapper[4676]: I0124 00:19:25.694304 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sf2s\" (UniqueName: \"kubernetes.io/projected/dccfed4b-e148-4e09-8c96-f5a28fefd6b7-kube-api-access-4sf2s\") on node \"crc\" DevicePath \"\""
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.078601 4676 generic.go:334] "Generic (PLEG): container finished" podID="dccfed4b-e148-4e09-8c96-f5a28fefd6b7" containerID="055fb9a70848ca751070c86e921c80b30bcb2f19ac12925601a33d8c6cc09ff1" exitCode=0
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.078663 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8rvgb"
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.078662 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rvgb" event={"ID":"dccfed4b-e148-4e09-8c96-f5a28fefd6b7","Type":"ContainerDied","Data":"055fb9a70848ca751070c86e921c80b30bcb2f19ac12925601a33d8c6cc09ff1"}
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.079006 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rvgb" event={"ID":"dccfed4b-e148-4e09-8c96-f5a28fefd6b7","Type":"ContainerDied","Data":"ae7b958b56efd4b4607c5292b0d29fa3ac42c0d3302dd61deaf7d117c5fd353a"}
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.079033 4676 scope.go:117] "RemoveContainer" containerID="055fb9a70848ca751070c86e921c80b30bcb2f19ac12925601a33d8c6cc09ff1"
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.081534 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz" event={"ID":"dd6346d8-9cf1-4364-b480-f4c2d872472f","Type":"ContainerStarted","Data":"766e9849778c46a698401986487d4e96a72a15c35d3b58bfc59ed914aad23d0a"}
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.081924 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz"
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.095314 4676 scope.go:117] "RemoveContainer" containerID="d78da559e910d3a887dafee8ce3b16fd2ae5d6fe6bbaf01fbe69c663c5b3dcff"
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.124900 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz" podStartSLOduration=3.181729177 podStartE2EDuration="1m10.124877894s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:18:18.833405548 +0000 UTC m=+882.863376539" lastFinishedPulling="2026-01-24 00:19:25.776554255 +0000 UTC m=+949.806525256" observedRunningTime="2026-01-24 00:19:26.11988738 +0000 UTC m=+950.149858391" watchObservedRunningTime="2026-01-24 00:19:26.124877894 +0000 UTC m=+950.154848895"
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.129995 4676 scope.go:117] "RemoveContainer" containerID="8860288802f8353ee23e76485e0052d8eb477114812b0d068faa208bb24aa943"
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.137352 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8rvgb"]
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.143343 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8rvgb"]
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.167986 4676 scope.go:117] "RemoveContainer" containerID="055fb9a70848ca751070c86e921c80b30bcb2f19ac12925601a33d8c6cc09ff1"
Jan 24 00:19:26 crc kubenswrapper[4676]: E0124 00:19:26.168708 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"055fb9a70848ca751070c86e921c80b30bcb2f19ac12925601a33d8c6cc09ff1\": container with ID starting with 055fb9a70848ca751070c86e921c80b30bcb2f19ac12925601a33d8c6cc09ff1 not found: ID does not exist" containerID="055fb9a70848ca751070c86e921c80b30bcb2f19ac12925601a33d8c6cc09ff1"
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.168738 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"055fb9a70848ca751070c86e921c80b30bcb2f19ac12925601a33d8c6cc09ff1"} err="failed to get container status \"055fb9a70848ca751070c86e921c80b30bcb2f19ac12925601a33d8c6cc09ff1\": rpc error: code = NotFound desc = could not find container \"055fb9a70848ca751070c86e921c80b30bcb2f19ac12925601a33d8c6cc09ff1\": container with ID starting with 055fb9a70848ca751070c86e921c80b30bcb2f19ac12925601a33d8c6cc09ff1 not found: ID does not exist"
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.168758 4676 scope.go:117] "RemoveContainer" containerID="d78da559e910d3a887dafee8ce3b16fd2ae5d6fe6bbaf01fbe69c663c5b3dcff"
Jan 24 00:19:26 crc kubenswrapper[4676]: E0124 00:19:26.169461 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d78da559e910d3a887dafee8ce3b16fd2ae5d6fe6bbaf01fbe69c663c5b3dcff\": container with ID starting with d78da559e910d3a887dafee8ce3b16fd2ae5d6fe6bbaf01fbe69c663c5b3dcff not found: ID does not exist" containerID="d78da559e910d3a887dafee8ce3b16fd2ae5d6fe6bbaf01fbe69c663c5b3dcff"
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.169487 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d78da559e910d3a887dafee8ce3b16fd2ae5d6fe6bbaf01fbe69c663c5b3dcff"} err="failed to get container status \"d78da559e910d3a887dafee8ce3b16fd2ae5d6fe6bbaf01fbe69c663c5b3dcff\": rpc error: code = NotFound desc = could not find container \"d78da559e910d3a887dafee8ce3b16fd2ae5d6fe6bbaf01fbe69c663c5b3dcff\": container with ID starting with d78da559e910d3a887dafee8ce3b16fd2ae5d6fe6bbaf01fbe69c663c5b3dcff not found: ID does not exist"
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.169502 4676 scope.go:117] "RemoveContainer" containerID="8860288802f8353ee23e76485e0052d8eb477114812b0d068faa208bb24aa943"
Jan 24 00:19:26 crc kubenswrapper[4676]: E0124 00:19:26.171215 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8860288802f8353ee23e76485e0052d8eb477114812b0d068faa208bb24aa943\": container with ID starting with 8860288802f8353ee23e76485e0052d8eb477114812b0d068faa208bb24aa943 not found: ID does not exist" containerID="8860288802f8353ee23e76485e0052d8eb477114812b0d068faa208bb24aa943"
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.171236 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8860288802f8353ee23e76485e0052d8eb477114812b0d068faa208bb24aa943"} err="failed to get container status \"8860288802f8353ee23e76485e0052d8eb477114812b0d068faa208bb24aa943\": rpc error: code = NotFound desc = could not find container \"8860288802f8353ee23e76485e0052d8eb477114812b0d068faa208bb24aa943\": container with ID starting with 8860288802f8353ee23e76485e0052d8eb477114812b0d068faa208bb24aa943 not found: ID does not exist"
Jan 24 00:19:26 crc kubenswrapper[4676]: I0124 00:19:26.262084 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dccfed4b-e148-4e09-8c96-f5a28fefd6b7" path="/var/lib/kubelet/pods/dccfed4b-e148-4e09-8c96-f5a28fefd6b7/volumes"
Jan 24 00:19:27 crc kubenswrapper[4676]: I0124 00:19:27.730644 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gqg82"
Jan 24 00:19:29 crc kubenswrapper[4676]: I0124 00:19:29.103531 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh" event={"ID":"d85fa79d-818f-4079-aac4-f3fa51a90e9a","Type":"ContainerStarted","Data":"137a6cfdf47ebb609e2b69b5378d7a09fae72d340a63b2433801e97509aeda1c"}
Jan 24 00:19:29 crc kubenswrapper[4676]: I0124 00:19:29.104068 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh"
Jan 24 00:19:29 crc kubenswrapper[4676]: I0124 00:19:29.110539 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw" event={"ID":"fce4c8d0-b903-4873-8c89-2f4b9dd9c05d","Type":"ContainerStarted","Data":"9050915a23f402f2f8cd922c2c90d2893c34a808c090b0f7343b1dfffae6d889"}
Jan 24 00:19:29 crc kubenswrapper[4676]: I0124 00:19:29.131696 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh" podStartSLOduration=3.967131079 podStartE2EDuration="1m13.131678048s" podCreationTimestamp="2026-01-24 00:18:16 +0000 UTC" firstStartedPulling="2026-01-24 00:18:18.853143637 +0000 UTC m=+882.883114638" lastFinishedPulling="2026-01-24 00:19:28.017690606 +0000 UTC m=+952.047661607" observedRunningTime="2026-01-24 00:19:29.1294544 +0000 UTC m=+953.159425411" watchObservedRunningTime="2026-01-24 00:19:29.131678048 +0000 UTC m=+953.161649049"
Jan 24 00:19:37 crc kubenswrapper[4676]: I0124 00:19:37.064803 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wqbcz"
Jan 24 00:19:37 crc kubenswrapper[4676]: I0124 00:19:37.080987 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-h44qw" podStartSLOduration=11.213571732 podStartE2EDuration="1m20.08097095s" podCreationTimestamp="2026-01-24 00:18:17 +0000 UTC" firstStartedPulling="2026-01-24 00:18:19.098830241 +0000 UTC m=+883.128801242" lastFinishedPulling="2026-01-24 00:19:27.966229449 +0000 UTC m=+951.996200460" observedRunningTime="2026-01-24 00:19:29.145259246 +0000 UTC m=+953.175230267" watchObservedRunningTime="2026-01-24 00:19:37.08097095 +0000 UTC m=+961.110941951"
Jan 24 00:19:37 crc kubenswrapper[4676]: I0124 00:19:37.114928 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4xp45"
Jan 24 00:19:37 crc kubenswrapper[4676]: I0124 00:19:37.585499 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-tdqwh"
Jan 24 00:19:39 crc kubenswrapper[4676]: I0124 00:19:39.364001 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 00:19:39 crc kubenswrapper[4676]: I0124 00:19:39.364086 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.501967 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tkq64"]
Jan 24 00:19:51 crc kubenswrapper[4676]: E0124 00:19:51.513522 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccfed4b-e148-4e09-8c96-f5a28fefd6b7" containerName="extract-utilities"
Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.513557 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccfed4b-e148-4e09-8c96-f5a28fefd6b7" containerName="extract-utilities"
Jan 24 00:19:51 crc kubenswrapper[4676]: E0124 00:19:51.513577 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb6d395-75c7-4f58-85cf-a97ad12bb288" containerName="extract-utilities"
Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.513584 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb6d395-75c7-4f58-85cf-a97ad12bb288" containerName="extract-utilities"
Jan 24 00:19:51 crc kubenswrapper[4676]: E0124 00:19:51.513595 4676
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccfed4b-e148-4e09-8c96-f5a28fefd6b7" containerName="registry-server" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.513611 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccfed4b-e148-4e09-8c96-f5a28fefd6b7" containerName="registry-server" Jan 24 00:19:51 crc kubenswrapper[4676]: E0124 00:19:51.513630 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccfed4b-e148-4e09-8c96-f5a28fefd6b7" containerName="extract-content" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.513637 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccfed4b-e148-4e09-8c96-f5a28fefd6b7" containerName="extract-content" Jan 24 00:19:51 crc kubenswrapper[4676]: E0124 00:19:51.513645 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb6d395-75c7-4f58-85cf-a97ad12bb288" containerName="registry-server" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.513651 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb6d395-75c7-4f58-85cf-a97ad12bb288" containerName="registry-server" Jan 24 00:19:51 crc kubenswrapper[4676]: E0124 00:19:51.513662 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb6d395-75c7-4f58-85cf-a97ad12bb288" containerName="extract-content" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.513669 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb6d395-75c7-4f58-85cf-a97ad12bb288" containerName="extract-content" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.513868 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="dccfed4b-e148-4e09-8c96-f5a28fefd6b7" containerName="registry-server" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.513881 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb6d395-75c7-4f58-85cf-a97ad12bb288" containerName="registry-server" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.514569 4676 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tkq64" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.518194 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.518419 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-4wchf" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.518571 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.518697 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.529449 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tkq64"] Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.551226 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mj7x9"] Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.552286 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.554467 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.597768 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mj7x9"] Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.615968 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b7a0ef-2050-425e-a0ea-b49b4c39662b-config\") pod \"dnsmasq-dns-78dd6ddcc-mj7x9\" (UID: \"51b7a0ef-2050-425e-a0ea-b49b4c39662b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.616018 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx2sb\" (UniqueName: \"kubernetes.io/projected/51b7a0ef-2050-425e-a0ea-b49b4c39662b-kube-api-access-jx2sb\") pod \"dnsmasq-dns-78dd6ddcc-mj7x9\" (UID: \"51b7a0ef-2050-425e-a0ea-b49b4c39662b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.616058 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b7a0ef-2050-425e-a0ea-b49b4c39662b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mj7x9\" (UID: \"51b7a0ef-2050-425e-a0ea-b49b4c39662b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.616176 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d55644-d050-4673-9c82-cba0c15d4537-config\") pod \"dnsmasq-dns-675f4bcbfc-tkq64\" (UID: \"67d55644-d050-4673-9c82-cba0c15d4537\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tkq64" Jan 24 00:19:51 
crc kubenswrapper[4676]: I0124 00:19:51.616238 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npdwx\" (UniqueName: \"kubernetes.io/projected/67d55644-d050-4673-9c82-cba0c15d4537-kube-api-access-npdwx\") pod \"dnsmasq-dns-675f4bcbfc-tkq64\" (UID: \"67d55644-d050-4673-9c82-cba0c15d4537\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tkq64" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.718075 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b7a0ef-2050-425e-a0ea-b49b4c39662b-config\") pod \"dnsmasq-dns-78dd6ddcc-mj7x9\" (UID: \"51b7a0ef-2050-425e-a0ea-b49b4c39662b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.719035 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx2sb\" (UniqueName: \"kubernetes.io/projected/51b7a0ef-2050-425e-a0ea-b49b4c39662b-kube-api-access-jx2sb\") pod \"dnsmasq-dns-78dd6ddcc-mj7x9\" (UID: \"51b7a0ef-2050-425e-a0ea-b49b4c39662b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.719421 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b7a0ef-2050-425e-a0ea-b49b4c39662b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mj7x9\" (UID: \"51b7a0ef-2050-425e-a0ea-b49b4c39662b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.720077 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d55644-d050-4673-9c82-cba0c15d4537-config\") pod \"dnsmasq-dns-675f4bcbfc-tkq64\" (UID: \"67d55644-d050-4673-9c82-cba0c15d4537\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tkq64" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.720164 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npdwx\" (UniqueName: \"kubernetes.io/projected/67d55644-d050-4673-9c82-cba0c15d4537-kube-api-access-npdwx\") pod \"dnsmasq-dns-675f4bcbfc-tkq64\" (UID: \"67d55644-d050-4673-9c82-cba0c15d4537\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tkq64" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.718992 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b7a0ef-2050-425e-a0ea-b49b4c39662b-config\") pod \"dnsmasq-dns-78dd6ddcc-mj7x9\" (UID: \"51b7a0ef-2050-425e-a0ea-b49b4c39662b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.720047 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b7a0ef-2050-425e-a0ea-b49b4c39662b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mj7x9\" (UID: \"51b7a0ef-2050-425e-a0ea-b49b4c39662b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.723250 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d55644-d050-4673-9c82-cba0c15d4537-config\") pod \"dnsmasq-dns-675f4bcbfc-tkq64\" (UID: \"67d55644-d050-4673-9c82-cba0c15d4537\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tkq64" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.738294 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx2sb\" (UniqueName: \"kubernetes.io/projected/51b7a0ef-2050-425e-a0ea-b49b4c39662b-kube-api-access-jx2sb\") pod \"dnsmasq-dns-78dd6ddcc-mj7x9\" (UID: \"51b7a0ef-2050-425e-a0ea-b49b4c39662b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.738908 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npdwx\" (UniqueName: 
\"kubernetes.io/projected/67d55644-d050-4673-9c82-cba0c15d4537-kube-api-access-npdwx\") pod \"dnsmasq-dns-675f4bcbfc-tkq64\" (UID: \"67d55644-d050-4673-9c82-cba0c15d4537\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tkq64" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.828304 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tkq64" Jan 24 00:19:51 crc kubenswrapper[4676]: I0124 00:19:51.866184 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" Jan 24 00:19:52 crc kubenswrapper[4676]: I0124 00:19:52.302597 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mj7x9"] Jan 24 00:19:52 crc kubenswrapper[4676]: W0124 00:19:52.305016 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51b7a0ef_2050_425e_a0ea_b49b4c39662b.slice/crio-3bc62abc87bbd103f1233186e6fef1bec32233b081b4c253c02f750c9b801731 WatchSource:0}: Error finding container 3bc62abc87bbd103f1233186e6fef1bec32233b081b4c253c02f750c9b801731: Status 404 returned error can't find the container with id 3bc62abc87bbd103f1233186e6fef1bec32233b081b4c253c02f750c9b801731 Jan 24 00:19:52 crc kubenswrapper[4676]: I0124 00:19:52.366081 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tkq64"] Jan 24 00:19:53 crc kubenswrapper[4676]: I0124 00:19:53.279647 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tkq64" event={"ID":"67d55644-d050-4673-9c82-cba0c15d4537","Type":"ContainerStarted","Data":"5eee3dc616db8987bcc5c04e8d439d4d666a187be99e08b6950bca9b712da938"} Jan 24 00:19:53 crc kubenswrapper[4676]: I0124 00:19:53.281812 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" 
event={"ID":"51b7a0ef-2050-425e-a0ea-b49b4c39662b","Type":"ContainerStarted","Data":"3bc62abc87bbd103f1233186e6fef1bec32233b081b4c253c02f750c9b801731"} Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.251274 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tkq64"] Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.290181 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9rgdb"] Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.291200 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.318014 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9rgdb"] Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.372342 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b577ffd6-6d01-45d7-98e5-8c21ac8de280-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9rgdb\" (UID: \"b577ffd6-6d01-45d7-98e5-8c21ac8de280\") " pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.372555 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b577ffd6-6d01-45d7-98e5-8c21ac8de280-config\") pod \"dnsmasq-dns-666b6646f7-9rgdb\" (UID: \"b577ffd6-6d01-45d7-98e5-8c21ac8de280\") " pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.372611 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhwbc\" (UniqueName: \"kubernetes.io/projected/b577ffd6-6d01-45d7-98e5-8c21ac8de280-kube-api-access-hhwbc\") pod \"dnsmasq-dns-666b6646f7-9rgdb\" (UID: \"b577ffd6-6d01-45d7-98e5-8c21ac8de280\") " 
pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.474527 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b577ffd6-6d01-45d7-98e5-8c21ac8de280-config\") pod \"dnsmasq-dns-666b6646f7-9rgdb\" (UID: \"b577ffd6-6d01-45d7-98e5-8c21ac8de280\") " pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.474581 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhwbc\" (UniqueName: \"kubernetes.io/projected/b577ffd6-6d01-45d7-98e5-8c21ac8de280-kube-api-access-hhwbc\") pod \"dnsmasq-dns-666b6646f7-9rgdb\" (UID: \"b577ffd6-6d01-45d7-98e5-8c21ac8de280\") " pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.474633 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b577ffd6-6d01-45d7-98e5-8c21ac8de280-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9rgdb\" (UID: \"b577ffd6-6d01-45d7-98e5-8c21ac8de280\") " pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.475620 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b577ffd6-6d01-45d7-98e5-8c21ac8de280-config\") pod \"dnsmasq-dns-666b6646f7-9rgdb\" (UID: \"b577ffd6-6d01-45d7-98e5-8c21ac8de280\") " pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.475672 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b577ffd6-6d01-45d7-98e5-8c21ac8de280-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9rgdb\" (UID: \"b577ffd6-6d01-45d7-98e5-8c21ac8de280\") " pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.521448 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhwbc\" (UniqueName: \"kubernetes.io/projected/b577ffd6-6d01-45d7-98e5-8c21ac8de280-kube-api-access-hhwbc\") pod \"dnsmasq-dns-666b6646f7-9rgdb\" (UID: \"b577ffd6-6d01-45d7-98e5-8c21ac8de280\") " pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.619856 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.623245 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mj7x9"] Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.657180 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zhmps"] Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.658608 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.735582 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zhmps"] Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.782473 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b500aa14-7f1d-4209-b0b7-619a37f58bcd-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zhmps\" (UID: \"b500aa14-7f1d-4209-b0b7-619a37f58bcd\") " pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.782517 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zmnp\" (UniqueName: \"kubernetes.io/projected/b500aa14-7f1d-4209-b0b7-619a37f58bcd-kube-api-access-4zmnp\") pod \"dnsmasq-dns-57d769cc4f-zhmps\" (UID: \"b500aa14-7f1d-4209-b0b7-619a37f58bcd\") " pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" 
Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.782546 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b500aa14-7f1d-4209-b0b7-619a37f58bcd-config\") pod \"dnsmasq-dns-57d769cc4f-zhmps\" (UID: \"b500aa14-7f1d-4209-b0b7-619a37f58bcd\") " pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.884389 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b500aa14-7f1d-4209-b0b7-619a37f58bcd-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zhmps\" (UID: \"b500aa14-7f1d-4209-b0b7-619a37f58bcd\") " pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.884440 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmnp\" (UniqueName: \"kubernetes.io/projected/b500aa14-7f1d-4209-b0b7-619a37f58bcd-kube-api-access-4zmnp\") pod \"dnsmasq-dns-57d769cc4f-zhmps\" (UID: \"b500aa14-7f1d-4209-b0b7-619a37f58bcd\") " pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.884474 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b500aa14-7f1d-4209-b0b7-619a37f58bcd-config\") pod \"dnsmasq-dns-57d769cc4f-zhmps\" (UID: \"b500aa14-7f1d-4209-b0b7-619a37f58bcd\") " pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.885303 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b500aa14-7f1d-4209-b0b7-619a37f58bcd-config\") pod \"dnsmasq-dns-57d769cc4f-zhmps\" (UID: \"b500aa14-7f1d-4209-b0b7-619a37f58bcd\") " pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.888245 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b500aa14-7f1d-4209-b0b7-619a37f58bcd-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zhmps\" (UID: \"b500aa14-7f1d-4209-b0b7-619a37f58bcd\") " pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" Jan 24 00:19:54 crc kubenswrapper[4676]: I0124 00:19:54.905281 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zmnp\" (UniqueName: \"kubernetes.io/projected/b500aa14-7f1d-4209-b0b7-619a37f58bcd-kube-api-access-4zmnp\") pod \"dnsmasq-dns-57d769cc4f-zhmps\" (UID: \"b500aa14-7f1d-4209-b0b7-619a37f58bcd\") " pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.036255 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.237626 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9rgdb"] Jan 24 00:19:55 crc kubenswrapper[4676]: W0124 00:19:55.253569 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb577ffd6_6d01_45d7_98e5_8c21ac8de280.slice/crio-f87d0959391165a8c4c5cf866d530b4896a47324d10a6af28b679062fde837bb WatchSource:0}: Error finding container f87d0959391165a8c4c5cf866d530b4896a47324d10a6af28b679062fde837bb: Status 404 returned error can't find the container with id f87d0959391165a8c4c5cf866d530b4896a47324d10a6af28b679062fde837bb Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.303198 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" event={"ID":"b577ffd6-6d01-45d7-98e5-8c21ac8de280","Type":"ContainerStarted","Data":"f87d0959391165a8c4c5cf866d530b4896a47324d10a6af28b679062fde837bb"} Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.454066 4676 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.455543 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.464968 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.465163 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.465288 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nxmk6" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.465426 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.465989 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.466099 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.466221 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.476453 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.499444 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.499506 
4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.499539 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68d6466c-a6ff-40ba-952d-007b14efdfd3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.499561 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v959\" (UniqueName: \"kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-kube-api-access-7v959\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.499590 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-config-data\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.499612 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.499817 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.499844 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.499870 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.499902 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.499926 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68d6466c-a6ff-40ba-952d-007b14efdfd3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.601101 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.601145 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68d6466c-a6ff-40ba-952d-007b14efdfd3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.601186 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.601210 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.601234 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68d6466c-a6ff-40ba-952d-007b14efdfd3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.601250 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v959\" (UniqueName: \"kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-kube-api-access-7v959\") pod \"rabbitmq-server-0\" (UID: 
\"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.601275 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-config-data\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.601309 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.601356 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.601388 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.601415 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.601713 4676 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.603549 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.604117 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.604360 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.605156 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-config-data\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.609527 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.614198 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68d6466c-a6ff-40ba-952d-007b14efdfd3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.614485 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.653243 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.653334 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68d6466c-a6ff-40ba-952d-007b14efdfd3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.668173 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v959\" (UniqueName: \"kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-kube-api-access-7v959\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.682266 4676 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zhmps"] Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.693131 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.800985 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.804084 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.808392 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.813658 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.813832 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.814004 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.814111 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.815993 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.816118 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 24 00:19:55 crc kubenswrapper[4676]: 
I0124 00:19:55.817248 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7x4sc" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.820503 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.909505 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.909565 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.909587 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36558de2-6aac-43e9-832d-2f96c46e8152-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.909617 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.909655 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.909736 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36558de2-6aac-43e9-832d-2f96c46e8152-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.909767 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.909794 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.909826 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.909873 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdctt\" (UniqueName: \"kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-kube-api-access-sdctt\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:55 crc kubenswrapper[4676]: I0124 00:19:55.909904 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.010985 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.011265 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36558de2-6aac-43e9-832d-2f96c46e8152-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.011296 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.011341 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.011391 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36558de2-6aac-43e9-832d-2f96c46e8152-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.011410 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.011455 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.011474 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.011498 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdctt\" (UniqueName: 
\"kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-kube-api-access-sdctt\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.011517 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.011554 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.012007 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.012764 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.014615 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"36558de2-6aac-43e9-832d-2f96c46e8152\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.015060 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.015699 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.016355 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.017962 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.027993 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36558de2-6aac-43e9-832d-2f96c46e8152-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 
24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.034101 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.035870 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdctt\" (UniqueName: \"kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-kube-api-access-sdctt\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.036229 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36558de2-6aac-43e9-832d-2f96c46e8152-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.056127 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.127243 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.325908 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" event={"ID":"b500aa14-7f1d-4209-b0b7-619a37f58bcd","Type":"ContainerStarted","Data":"98e1e2ee667220310f02f8a03eb3c9bedd6e3875fe7c7233b27e507391445c5f"} Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.347352 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 00:19:56 crc kubenswrapper[4676]: W0124 00:19:56.383310 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68d6466c_a6ff_40ba_952d_007b14efdfd3.slice/crio-32030fde9c3a43091d81748f9987539fcf3f4b60bda9f5dc9e85b5f47719d7eb WatchSource:0}: Error finding container 32030fde9c3a43091d81748f9987539fcf3f4b60bda9f5dc9e85b5f47719d7eb: Status 404 returned error can't find the container with id 32030fde9c3a43091d81748f9987539fcf3f4b60bda9f5dc9e85b5f47719d7eb Jan 24 00:19:56 crc kubenswrapper[4676]: I0124 00:19:56.618681 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 00:19:56 crc kubenswrapper[4676]: W0124 00:19:56.633805 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36558de2_6aac_43e9_832d_2f96c46e8152.slice/crio-a90a5dbbd754f4d91197a2d8faa756ac77a0f539415213117ebe9918b73147bc WatchSource:0}: Error finding container a90a5dbbd754f4d91197a2d8faa756ac77a0f539415213117ebe9918b73147bc: Status 404 returned error can't find the container with id a90a5dbbd754f4d91197a2d8faa756ac77a0f539415213117ebe9918b73147bc Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.092332 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.093630 4676 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.112454 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.112887 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.113033 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-j4llt" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.113146 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.115984 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.154409 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.233901 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2bbaae64-ac2d-43c6-8984-5483f2eb4211-config-data-default\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.233946 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2bbaae64-ac2d-43c6-8984-5483f2eb4211-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.233984 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbaae64-ac2d-43c6-8984-5483f2eb4211-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.234013 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bbaae64-ac2d-43c6-8984-5483f2eb4211-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.234078 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bbaae64-ac2d-43c6-8984-5483f2eb4211-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.234142 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.234280 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2bbaae64-ac2d-43c6-8984-5483f2eb4211-kolla-config\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.234331 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tcvcc\" (UniqueName: \"kubernetes.io/projected/2bbaae64-ac2d-43c6-8984-5483f2eb4211-kube-api-access-tcvcc\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.335728 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbaae64-ac2d-43c6-8984-5483f2eb4211-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.335772 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bbaae64-ac2d-43c6-8984-5483f2eb4211-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.335793 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bbaae64-ac2d-43c6-8984-5483f2eb4211-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.335814 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.335860 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2bbaae64-ac2d-43c6-8984-5483f2eb4211-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.335887 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcvcc\" (UniqueName: \"kubernetes.io/projected/2bbaae64-ac2d-43c6-8984-5483f2eb4211-kube-api-access-tcvcc\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.335913 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2bbaae64-ac2d-43c6-8984-5483f2eb4211-config-data-default\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.335931 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2bbaae64-ac2d-43c6-8984-5483f2eb4211-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.336318 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2bbaae64-ac2d-43c6-8984-5483f2eb4211-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.337421 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Jan 24 00:19:57 
crc kubenswrapper[4676]: I0124 00:19:57.337950 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2bbaae64-ac2d-43c6-8984-5483f2eb4211-config-data-default\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.338805 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2bbaae64-ac2d-43c6-8984-5483f2eb4211-kolla-config\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.339599 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bbaae64-ac2d-43c6-8984-5483f2eb4211-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.366270 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbaae64-ac2d-43c6-8984-5483f2eb4211-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.367574 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bbaae64-ac2d-43c6-8984-5483f2eb4211-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.369955 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcvcc\" (UniqueName: 
\"kubernetes.io/projected/2bbaae64-ac2d-43c6-8984-5483f2eb4211-kube-api-access-tcvcc\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.374573 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"2bbaae64-ac2d-43c6-8984-5483f2eb4211\") " pod="openstack/openstack-galera-0" Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.399835 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"68d6466c-a6ff-40ba-952d-007b14efdfd3","Type":"ContainerStarted","Data":"32030fde9c3a43091d81748f9987539fcf3f4b60bda9f5dc9e85b5f47719d7eb"} Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.420240 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"36558de2-6aac-43e9-832d-2f96c46e8152","Type":"ContainerStarted","Data":"a90a5dbbd754f4d91197a2d8faa756ac77a0f539415213117ebe9918b73147bc"} Jan 24 00:19:57 crc kubenswrapper[4676]: I0124 00:19:57.431726 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.068925 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.309181 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.312572 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.312727 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.315744 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.317718 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-5czjs" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.317812 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.317972 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.458239 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19365292-50d8-4e94-952f-2df7ee20f0ba-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.458311 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.458352 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19365292-50d8-4e94-952f-2df7ee20f0ba-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc 
kubenswrapper[4676]: I0124 00:19:58.458472 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19365292-50d8-4e94-952f-2df7ee20f0ba-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.458522 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19365292-50d8-4e94-952f-2df7ee20f0ba-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.458552 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19365292-50d8-4e94-952f-2df7ee20f0ba-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.458573 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19365292-50d8-4e94-952f-2df7ee20f0ba-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.458611 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfb6c\" (UniqueName: \"kubernetes.io/projected/19365292-50d8-4e94-952f-2df7ee20f0ba-kube-api-access-cfb6c\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " 
pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.513839 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.515601 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.521698 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-x2pp8" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.525364 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.525646 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.541254 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.559482 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19365292-50d8-4e94-952f-2df7ee20f0ba-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.559536 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19365292-50d8-4e94-952f-2df7ee20f0ba-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.559576 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfb6c\" (UniqueName: 
\"kubernetes.io/projected/19365292-50d8-4e94-952f-2df7ee20f0ba-kube-api-access-cfb6c\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.559613 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19365292-50d8-4e94-952f-2df7ee20f0ba-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.559647 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.559676 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19365292-50d8-4e94-952f-2df7ee20f0ba-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.559697 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19365292-50d8-4e94-952f-2df7ee20f0ba-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.559734 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19365292-50d8-4e94-952f-2df7ee20f0ba-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.560345 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19365292-50d8-4e94-952f-2df7ee20f0ba-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.560432 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.560651 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19365292-50d8-4e94-952f-2df7ee20f0ba-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.560730 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19365292-50d8-4e94-952f-2df7ee20f0ba-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.564538 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19365292-50d8-4e94-952f-2df7ee20f0ba-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " 
pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.580698 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19365292-50d8-4e94-952f-2df7ee20f0ba-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.584731 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19365292-50d8-4e94-952f-2df7ee20f0ba-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.591021 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfb6c\" (UniqueName: \"kubernetes.io/projected/19365292-50d8-4e94-952f-2df7ee20f0ba-kube-api-access-cfb6c\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.604852 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"19365292-50d8-4e94-952f-2df7ee20f0ba\") " pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.642980 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.661798 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/545f6045-cf2f-4d4b-91d8-227148ddd71a-config-data\") pod \"memcached-0\" (UID: \"545f6045-cf2f-4d4b-91d8-227148ddd71a\") " pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.661844 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j59wl\" (UniqueName: \"kubernetes.io/projected/545f6045-cf2f-4d4b-91d8-227148ddd71a-kube-api-access-j59wl\") pod \"memcached-0\" (UID: \"545f6045-cf2f-4d4b-91d8-227148ddd71a\") " pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.661872 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/545f6045-cf2f-4d4b-91d8-227148ddd71a-kolla-config\") pod \"memcached-0\" (UID: \"545f6045-cf2f-4d4b-91d8-227148ddd71a\") " pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.661897 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/545f6045-cf2f-4d4b-91d8-227148ddd71a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"545f6045-cf2f-4d4b-91d8-227148ddd71a\") " pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.662247 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/545f6045-cf2f-4d4b-91d8-227148ddd71a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"545f6045-cf2f-4d4b-91d8-227148ddd71a\") " pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 
00:19:58.763874 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/545f6045-cf2f-4d4b-91d8-227148ddd71a-config-data\") pod \"memcached-0\" (UID: \"545f6045-cf2f-4d4b-91d8-227148ddd71a\") " pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.763906 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j59wl\" (UniqueName: \"kubernetes.io/projected/545f6045-cf2f-4d4b-91d8-227148ddd71a-kube-api-access-j59wl\") pod \"memcached-0\" (UID: \"545f6045-cf2f-4d4b-91d8-227148ddd71a\") " pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.763940 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/545f6045-cf2f-4d4b-91d8-227148ddd71a-kolla-config\") pod \"memcached-0\" (UID: \"545f6045-cf2f-4d4b-91d8-227148ddd71a\") " pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.763964 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/545f6045-cf2f-4d4b-91d8-227148ddd71a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"545f6045-cf2f-4d4b-91d8-227148ddd71a\") " pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.765276 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/545f6045-cf2f-4d4b-91d8-227148ddd71a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"545f6045-cf2f-4d4b-91d8-227148ddd71a\") " pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.768139 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/545f6045-cf2f-4d4b-91d8-227148ddd71a-config-data\") pod 
\"memcached-0\" (UID: \"545f6045-cf2f-4d4b-91d8-227148ddd71a\") " pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.769131 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/545f6045-cf2f-4d4b-91d8-227148ddd71a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"545f6045-cf2f-4d4b-91d8-227148ddd71a\") " pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.769673 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/545f6045-cf2f-4d4b-91d8-227148ddd71a-kolla-config\") pod \"memcached-0\" (UID: \"545f6045-cf2f-4d4b-91d8-227148ddd71a\") " pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.789419 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/545f6045-cf2f-4d4b-91d8-227148ddd71a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"545f6045-cf2f-4d4b-91d8-227148ddd71a\") " pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.794562 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j59wl\" (UniqueName: \"kubernetes.io/projected/545f6045-cf2f-4d4b-91d8-227148ddd71a-kube-api-access-j59wl\") pod \"memcached-0\" (UID: \"545f6045-cf2f-4d4b-91d8-227148ddd71a\") " pod="openstack/memcached-0" Jan 24 00:19:58 crc kubenswrapper[4676]: I0124 00:19:58.835968 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 24 00:20:00 crc kubenswrapper[4676]: I0124 00:20:00.339865 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 00:20:00 crc kubenswrapper[4676]: I0124 00:20:00.343102 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 00:20:00 crc kubenswrapper[4676]: I0124 00:20:00.346828 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-bcjhj" Jan 24 00:20:00 crc kubenswrapper[4676]: I0124 00:20:00.366867 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 00:20:00 crc kubenswrapper[4676]: I0124 00:20:00.460127 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqg2d\" (UniqueName: \"kubernetes.io/projected/c626d726-d406-41c7-8e66-07cc148d3aa7-kube-api-access-vqg2d\") pod \"kube-state-metrics-0\" (UID: \"c626d726-d406-41c7-8e66-07cc148d3aa7\") " pod="openstack/kube-state-metrics-0" Jan 24 00:20:00 crc kubenswrapper[4676]: I0124 00:20:00.561736 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqg2d\" (UniqueName: \"kubernetes.io/projected/c626d726-d406-41c7-8e66-07cc148d3aa7-kube-api-access-vqg2d\") pod \"kube-state-metrics-0\" (UID: \"c626d726-d406-41c7-8e66-07cc148d3aa7\") " pod="openstack/kube-state-metrics-0" Jan 24 00:20:00 crc kubenswrapper[4676]: I0124 00:20:00.597628 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqg2d\" (UniqueName: \"kubernetes.io/projected/c626d726-d406-41c7-8e66-07cc148d3aa7-kube-api-access-vqg2d\") pod \"kube-state-metrics-0\" (UID: \"c626d726-d406-41c7-8e66-07cc148d3aa7\") " pod="openstack/kube-state-metrics-0" Jan 24 00:20:00 crc kubenswrapper[4676]: I0124 00:20:00.672314 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 00:20:02 crc kubenswrapper[4676]: I0124 00:20:02.237190 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kvzq7"] Jan 24 00:20:02 crc kubenswrapper[4676]: I0124 00:20:02.239302 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvzq7" Jan 24 00:20:02 crc kubenswrapper[4676]: I0124 00:20:02.246925 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kvzq7"] Jan 24 00:20:02 crc kubenswrapper[4676]: I0124 00:20:02.289161 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df4276b-01b4-403f-b040-85d5a8a9ef03-catalog-content\") pod \"certified-operators-kvzq7\" (UID: \"3df4276b-01b4-403f-b040-85d5a8a9ef03\") " pod="openshift-marketplace/certified-operators-kvzq7" Jan 24 00:20:02 crc kubenswrapper[4676]: I0124 00:20:02.289227 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df4276b-01b4-403f-b040-85d5a8a9ef03-utilities\") pod \"certified-operators-kvzq7\" (UID: \"3df4276b-01b4-403f-b040-85d5a8a9ef03\") " pod="openshift-marketplace/certified-operators-kvzq7" Jan 24 00:20:02 crc kubenswrapper[4676]: I0124 00:20:02.289277 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk82l\" (UniqueName: \"kubernetes.io/projected/3df4276b-01b4-403f-b040-85d5a8a9ef03-kube-api-access-tk82l\") pod \"certified-operators-kvzq7\" (UID: \"3df4276b-01b4-403f-b040-85d5a8a9ef03\") " pod="openshift-marketplace/certified-operators-kvzq7" Jan 24 00:20:02 crc kubenswrapper[4676]: I0124 00:20:02.390543 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df4276b-01b4-403f-b040-85d5a8a9ef03-catalog-content\") pod \"certified-operators-kvzq7\" (UID: \"3df4276b-01b4-403f-b040-85d5a8a9ef03\") " pod="openshift-marketplace/certified-operators-kvzq7" Jan 24 00:20:02 crc kubenswrapper[4676]: I0124 00:20:02.390613 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df4276b-01b4-403f-b040-85d5a8a9ef03-utilities\") pod \"certified-operators-kvzq7\" (UID: \"3df4276b-01b4-403f-b040-85d5a8a9ef03\") " pod="openshift-marketplace/certified-operators-kvzq7" Jan 24 00:20:02 crc kubenswrapper[4676]: I0124 00:20:02.390665 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk82l\" (UniqueName: \"kubernetes.io/projected/3df4276b-01b4-403f-b040-85d5a8a9ef03-kube-api-access-tk82l\") pod \"certified-operators-kvzq7\" (UID: \"3df4276b-01b4-403f-b040-85d5a8a9ef03\") " pod="openshift-marketplace/certified-operators-kvzq7" Jan 24 00:20:02 crc kubenswrapper[4676]: I0124 00:20:02.391168 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df4276b-01b4-403f-b040-85d5a8a9ef03-catalog-content\") pod \"certified-operators-kvzq7\" (UID: \"3df4276b-01b4-403f-b040-85d5a8a9ef03\") " pod="openshift-marketplace/certified-operators-kvzq7" Jan 24 00:20:02 crc kubenswrapper[4676]: I0124 00:20:02.391226 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df4276b-01b4-403f-b040-85d5a8a9ef03-utilities\") pod \"certified-operators-kvzq7\" (UID: \"3df4276b-01b4-403f-b040-85d5a8a9ef03\") " pod="openshift-marketplace/certified-operators-kvzq7" Jan 24 00:20:02 crc kubenswrapper[4676]: I0124 00:20:02.408993 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk82l\" (UniqueName: 
\"kubernetes.io/projected/3df4276b-01b4-403f-b040-85d5a8a9ef03-kube-api-access-tk82l\") pod \"certified-operators-kvzq7\" (UID: \"3df4276b-01b4-403f-b040-85d5a8a9ef03\") " pod="openshift-marketplace/certified-operators-kvzq7" Jan 24 00:20:02 crc kubenswrapper[4676]: I0124 00:20:02.567290 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvzq7" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.018794 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jsd4n"] Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.020703 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.025133 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-brqw8" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.025217 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.026287 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.038478 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jsd4n"] Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.084012 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-6sl9q"] Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.088644 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.091394 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6sl9q"] Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.140429 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7adbcf83-efbd-4e8d-97e5-f8768463284a-scripts\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.140490 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8blx\" (UniqueName: \"kubernetes.io/projected/7adbcf83-efbd-4e8d-97e5-f8768463284a-kube-api-access-m8blx\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.140559 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7adbcf83-efbd-4e8d-97e5-f8768463284a-combined-ca-bundle\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.140649 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7adbcf83-efbd-4e8d-97e5-f8768463284a-ovn-controller-tls-certs\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.140703 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/7adbcf83-efbd-4e8d-97e5-f8768463284a-var-log-ovn\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.140759 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7adbcf83-efbd-4e8d-97e5-f8768463284a-var-run-ovn\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.140795 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7adbcf83-efbd-4e8d-97e5-f8768463284a-var-run\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.242006 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/427fbd2d-16ef-44a6-a71d-8172f56b863d-var-lib\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.242048 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7adbcf83-efbd-4e8d-97e5-f8768463284a-combined-ca-bundle\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.242098 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7adbcf83-efbd-4e8d-97e5-f8768463284a-ovn-controller-tls-certs\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.242117 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/427fbd2d-16ef-44a6-a71d-8172f56b863d-scripts\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.242147 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/427fbd2d-16ef-44a6-a71d-8172f56b863d-etc-ovs\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.242174 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7adbcf83-efbd-4e8d-97e5-f8768463284a-var-log-ovn\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.242201 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7adbcf83-efbd-4e8d-97e5-f8768463284a-var-run-ovn\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.242226 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7adbcf83-efbd-4e8d-97e5-f8768463284a-var-run\") pod \"ovn-controller-jsd4n\" (UID: 
\"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.242249 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpbnv\" (UniqueName: \"kubernetes.io/projected/427fbd2d-16ef-44a6-a71d-8172f56b863d-kube-api-access-fpbnv\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.242269 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/427fbd2d-16ef-44a6-a71d-8172f56b863d-var-run\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.242287 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7adbcf83-efbd-4e8d-97e5-f8768463284a-scripts\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.242311 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/427fbd2d-16ef-44a6-a71d-8172f56b863d-var-log\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.242332 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8blx\" (UniqueName: \"kubernetes.io/projected/7adbcf83-efbd-4e8d-97e5-f8768463284a-kube-api-access-m8blx\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " 
pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.243764 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7adbcf83-efbd-4e8d-97e5-f8768463284a-var-run-ovn\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.243837 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7adbcf83-efbd-4e8d-97e5-f8768463284a-var-log-ovn\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.245346 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7adbcf83-efbd-4e8d-97e5-f8768463284a-scripts\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.245492 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7adbcf83-efbd-4e8d-97e5-f8768463284a-var-run\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.247318 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7adbcf83-efbd-4e8d-97e5-f8768463284a-ovn-controller-tls-certs\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.258647 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/7adbcf83-efbd-4e8d-97e5-f8768463284a-combined-ca-bundle\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.271958 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8blx\" (UniqueName: \"kubernetes.io/projected/7adbcf83-efbd-4e8d-97e5-f8768463284a-kube-api-access-m8blx\") pod \"ovn-controller-jsd4n\" (UID: \"7adbcf83-efbd-4e8d-97e5-f8768463284a\") " pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.337815 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.343665 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/427fbd2d-16ef-44a6-a71d-8172f56b863d-scripts\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.343820 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/427fbd2d-16ef-44a6-a71d-8172f56b863d-etc-ovs\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.343971 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpbnv\" (UniqueName: \"kubernetes.io/projected/427fbd2d-16ef-44a6-a71d-8172f56b863d-kube-api-access-fpbnv\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.344076 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/427fbd2d-16ef-44a6-a71d-8172f56b863d-var-run\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.344190 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/427fbd2d-16ef-44a6-a71d-8172f56b863d-var-log\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.344299 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/427fbd2d-16ef-44a6-a71d-8172f56b863d-var-lib\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.344081 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/427fbd2d-16ef-44a6-a71d-8172f56b863d-etc-ovs\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.344137 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/427fbd2d-16ef-44a6-a71d-8172f56b863d-var-run\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.344345 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/427fbd2d-16ef-44a6-a71d-8172f56b863d-var-log\") pod \"ovn-controller-ovs-6sl9q\" (UID: 
\"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.344610 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/427fbd2d-16ef-44a6-a71d-8172f56b863d-var-lib\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.346184 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/427fbd2d-16ef-44a6-a71d-8172f56b863d-scripts\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.362414 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpbnv\" (UniqueName: \"kubernetes.io/projected/427fbd2d-16ef-44a6-a71d-8172f56b863d-kube-api-access-fpbnv\") pod \"ovn-controller-ovs-6sl9q\" (UID: \"427fbd2d-16ef-44a6-a71d-8172f56b863d\") " pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.402593 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.895815 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.897214 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.900880 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.901066 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wj2kh" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.901089 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.901204 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.906389 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 24 00:20:05 crc kubenswrapper[4676]: I0124 00:20:05.932885 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.054446 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bff778a-b10f-4ba9-a12f-f4086608fd30-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.054512 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bff778a-b10f-4ba9-a12f-f4086608fd30-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.054554 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.054619 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3bff778a-b10f-4ba9-a12f-f4086608fd30-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.054684 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bff778a-b10f-4ba9-a12f-f4086608fd30-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.054737 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bff778a-b10f-4ba9-a12f-f4086608fd30-config\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.054782 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bff778a-b10f-4ba9-a12f-f4086608fd30-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.054816 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tsl7\" (UniqueName: 
\"kubernetes.io/projected/3bff778a-b10f-4ba9-a12f-f4086608fd30-kube-api-access-6tsl7\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.156481 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bff778a-b10f-4ba9-a12f-f4086608fd30-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.156554 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bff778a-b10f-4ba9-a12f-f4086608fd30-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.156597 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.156662 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3bff778a-b10f-4ba9-a12f-f4086608fd30-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.156755 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bff778a-b10f-4ba9-a12f-f4086608fd30-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" 
Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.156816 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bff778a-b10f-4ba9-a12f-f4086608fd30-config\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.156872 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bff778a-b10f-4ba9-a12f-f4086608fd30-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.156905 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tsl7\" (UniqueName: \"kubernetes.io/projected/3bff778a-b10f-4ba9-a12f-f4086608fd30-kube-api-access-6tsl7\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.157099 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.157551 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3bff778a-b10f-4ba9-a12f-f4086608fd30-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.158621 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3bff778a-b10f-4ba9-a12f-f4086608fd30-config\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.162010 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bff778a-b10f-4ba9-a12f-f4086608fd30-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.169526 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bff778a-b10f-4ba9-a12f-f4086608fd30-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.169742 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bff778a-b10f-4ba9-a12f-f4086608fd30-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.169844 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bff778a-b10f-4ba9-a12f-f4086608fd30-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.174857 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tsl7\" (UniqueName: \"kubernetes.io/projected/3bff778a-b10f-4ba9-a12f-f4086608fd30-kube-api-access-6tsl7\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " 
pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.185870 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3bff778a-b10f-4ba9-a12f-f4086608fd30\") " pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:06 crc kubenswrapper[4676]: I0124 00:20:06.222900 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.726615 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.728682 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.731310 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.732823 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.733346 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-vwpk5" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.735860 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.743924 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.880579 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.880684 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-config\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.880766 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.880898 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.880957 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.881002 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.881091 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.881154 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrj7g\" (UniqueName: \"kubernetes.io/projected/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-kube-api-access-zrj7g\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.983005 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.983466 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrj7g\" (UniqueName: \"kubernetes.io/projected/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-kube-api-access-zrj7g\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.984012 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 
crc kubenswrapper[4676]: I0124 00:20:07.984173 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-config\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.985154 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.985293 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.985793 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.986250 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.985194 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.985093 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-config\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.985749 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.986655 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.990848 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:07 crc kubenswrapper[4676]: I0124 00:20:07.992078 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 
00:20:08 crc kubenswrapper[4676]: I0124 00:20:07.999993 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:08 crc kubenswrapper[4676]: I0124 00:20:08.008196 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrj7g\" (UniqueName: \"kubernetes.io/projected/f3648c65-9fcb-4a9e-b4cb-d8437dc00141-kube-api-access-zrj7g\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:08 crc kubenswrapper[4676]: I0124 00:20:08.013812 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f3648c65-9fcb-4a9e-b4cb-d8437dc00141\") " pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:08 crc kubenswrapper[4676]: I0124 00:20:08.047023 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:09 crc kubenswrapper[4676]: I0124 00:20:09.363989 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:20:09 crc kubenswrapper[4676]: I0124 00:20:09.364071 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:20:09 crc kubenswrapper[4676]: I0124 00:20:09.364129 4676 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:20:09 crc kubenswrapper[4676]: I0124 00:20:09.365059 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2687110039e3aba350a72ed3647bbafb008d22f301a8b50baa7159c6eca5ba33"} pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 00:20:09 crc kubenswrapper[4676]: I0124 00:20:09.365144 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" containerID="cri-o://2687110039e3aba350a72ed3647bbafb008d22f301a8b50baa7159c6eca5ba33" gracePeriod=600 Jan 24 00:20:09 crc kubenswrapper[4676]: I0124 00:20:09.568723 4676 generic.go:334] "Generic (PLEG): container finished" 
podID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerID="2687110039e3aba350a72ed3647bbafb008d22f301a8b50baa7159c6eca5ba33" exitCode=0 Jan 24 00:20:09 crc kubenswrapper[4676]: I0124 00:20:09.568792 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerDied","Data":"2687110039e3aba350a72ed3647bbafb008d22f301a8b50baa7159c6eca5ba33"} Jan 24 00:20:09 crc kubenswrapper[4676]: I0124 00:20:09.569089 4676 scope.go:117] "RemoveContainer" containerID="daf4dfd81dc7faee8c5a37cce872ffde5731f2d91708788dd42d2993fec18ba6" Jan 24 00:20:11 crc kubenswrapper[4676]: I0124 00:20:11.588731 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2bbaae64-ac2d-43c6-8984-5483f2eb4211","Type":"ContainerStarted","Data":"a1c59ca6b94252368f835fcfdca3061486fe13cfd8944f4df5139db16f75750c"} Jan 24 00:20:14 crc kubenswrapper[4676]: E0124 00:20:14.660205 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 24 00:20:14 crc kubenswrapper[4676]: E0124 00:20:14.660661 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7v959,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerR
esizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(68d6466c-a6ff-40ba-952d-007b14efdfd3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:20:14 crc kubenswrapper[4676]: E0124 00:20:14.661948 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="68d6466c-a6ff-40ba-952d-007b14efdfd3" Jan 24 00:20:14 crc kubenswrapper[4676]: E0124 00:20:14.690171 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 24 00:20:14 crc kubenswrapper[4676]: E0124 00:20:14.690558 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdctt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(36558de2-6aac-43e9-832d-2f96c46e8152): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:20:14 crc 
kubenswrapper[4676]: E0124 00:20:14.692228 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="36558de2-6aac-43e9-832d-2f96c46e8152" Jan 24 00:20:15 crc kubenswrapper[4676]: I0124 00:20:15.037013 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 24 00:20:15 crc kubenswrapper[4676]: E0124 00:20:15.624783 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="36558de2-6aac-43e9-832d-2f96c46e8152" Jan 24 00:20:15 crc kubenswrapper[4676]: E0124 00:20:15.630810 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="68d6466c-a6ff-40ba-952d-007b14efdfd3" Jan 24 00:20:21 crc kubenswrapper[4676]: I0124 00:20:21.674661 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"545f6045-cf2f-4d4b-91d8-227148ddd71a","Type":"ContainerStarted","Data":"129868f7f45e8628b78e5f212f82e735051410c9a3c82315b54413272f8815b7"} Jan 24 00:20:22 crc kubenswrapper[4676]: E0124 00:20:22.868401 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 24 00:20:22 crc kubenswrapper[4676]: E0124 00:20:22.868588 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jx2sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevic
e{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-mj7x9_openstack(51b7a0ef-2050-425e-a0ea-b49b4c39662b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:20:22 crc kubenswrapper[4676]: E0124 00:20:22.869806 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" podUID="51b7a0ef-2050-425e-a0ea-b49b4c39662b" Jan 24 00:20:22 crc kubenswrapper[4676]: E0124 00:20:22.881348 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 24 00:20:22 crc kubenswrapper[4676]: E0124 00:20:22.881500 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zmnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-zhmps_openstack(b500aa14-7f1d-4209-b0b7-619a37f58bcd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:20:22 crc kubenswrapper[4676]: E0124 00:20:22.883593 4676 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" podUID="b500aa14-7f1d-4209-b0b7-619a37f58bcd" Jan 24 00:20:22 crc kubenswrapper[4676]: E0124 00:20:22.908644 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 24 00:20:22 crc kubenswrapper[4676]: E0124 00:20:22.908783 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhwbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-9rgdb_openstack(b577ffd6-6d01-45d7-98e5-8c21ac8de280): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:20:22 crc kubenswrapper[4676]: E0124 00:20:22.909966 4676 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" podUID="b577ffd6-6d01-45d7-98e5-8c21ac8de280" Jan 24 00:20:22 crc kubenswrapper[4676]: E0124 00:20:22.922110 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 24 00:20:22 crc kubenswrapper[4676]: E0124 00:20:22.922533 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npdwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-tkq64_openstack(67d55644-d050-4673-9c82-cba0c15d4537): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:20:22 crc kubenswrapper[4676]: E0124 00:20:22.924238 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-tkq64" podUID="67d55644-d050-4673-9c82-cba0c15d4537" Jan 24 00:20:23 crc kubenswrapper[4676]: I0124 00:20:23.434280 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 24 00:20:23 crc kubenswrapper[4676]: E0124 00:20:23.700414 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" podUID="b500aa14-7f1d-4209-b0b7-619a37f58bcd" Jan 24 00:20:23 crc kubenswrapper[4676]: E0124 00:20:23.700536 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" podUID="b577ffd6-6d01-45d7-98e5-8c21ac8de280" Jan 24 00:20:25 crc kubenswrapper[4676]: E0124 00:20:25.182458 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 24 00:20:25 crc kubenswrapper[4676]: E0124 00:20:25.182907 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tcvcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(2bbaae64-ac2d-43c6-8984-5483f2eb4211): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:20:25 crc kubenswrapper[4676]: E0124 00:20:25.184435 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="2bbaae64-ac2d-43c6-8984-5483f2eb4211" Jan 24 00:20:25 crc kubenswrapper[4676]: W0124 00:20:25.205661 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3648c65_9fcb_4a9e_b4cb_d8437dc00141.slice/crio-1a8a42fc79fdf52f814acc2ac5d113099eb491ced0b8ac0af600ea3a850f3feb WatchSource:0}: Error finding container 1a8a42fc79fdf52f814acc2ac5d113099eb491ced0b8ac0af600ea3a850f3feb: Status 404 returned error can't find the container with id 1a8a42fc79fdf52f814acc2ac5d113099eb491ced0b8ac0af600ea3a850f3feb Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.304971 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tkq64" Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.368120 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.397306 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d55644-d050-4673-9c82-cba0c15d4537-config\") pod \"67d55644-d050-4673-9c82-cba0c15d4537\" (UID: \"67d55644-d050-4673-9c82-cba0c15d4537\") " Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.397347 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npdwx\" (UniqueName: \"kubernetes.io/projected/67d55644-d050-4673-9c82-cba0c15d4537-kube-api-access-npdwx\") pod \"67d55644-d050-4673-9c82-cba0c15d4537\" (UID: \"67d55644-d050-4673-9c82-cba0c15d4537\") " Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.401498 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67d55644-d050-4673-9c82-cba0c15d4537-config" (OuterVolumeSpecName: "config") pod "67d55644-d050-4673-9c82-cba0c15d4537" (UID: "67d55644-d050-4673-9c82-cba0c15d4537"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.411097 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67d55644-d050-4673-9c82-cba0c15d4537-kube-api-access-npdwx" (OuterVolumeSpecName: "kube-api-access-npdwx") pod "67d55644-d050-4673-9c82-cba0c15d4537" (UID: "67d55644-d050-4673-9c82-cba0c15d4537"). InnerVolumeSpecName "kube-api-access-npdwx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.501969 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b7a0ef-2050-425e-a0ea-b49b4c39662b-config\") pod \"51b7a0ef-2050-425e-a0ea-b49b4c39662b\" (UID: \"51b7a0ef-2050-425e-a0ea-b49b4c39662b\") " Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.502073 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx2sb\" (UniqueName: \"kubernetes.io/projected/51b7a0ef-2050-425e-a0ea-b49b4c39662b-kube-api-access-jx2sb\") pod \"51b7a0ef-2050-425e-a0ea-b49b4c39662b\" (UID: \"51b7a0ef-2050-425e-a0ea-b49b4c39662b\") " Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.502115 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b7a0ef-2050-425e-a0ea-b49b4c39662b-dns-svc\") pod \"51b7a0ef-2050-425e-a0ea-b49b4c39662b\" (UID: \"51b7a0ef-2050-425e-a0ea-b49b4c39662b\") " Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.502400 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d55644-d050-4673-9c82-cba0c15d4537-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.502417 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npdwx\" (UniqueName: \"kubernetes.io/projected/67d55644-d050-4673-9c82-cba0c15d4537-kube-api-access-npdwx\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.502703 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b7a0ef-2050-425e-a0ea-b49b4c39662b-config" (OuterVolumeSpecName: "config") pod "51b7a0ef-2050-425e-a0ea-b49b4c39662b" (UID: "51b7a0ef-2050-425e-a0ea-b49b4c39662b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.502884 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b7a0ef-2050-425e-a0ea-b49b4c39662b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "51b7a0ef-2050-425e-a0ea-b49b4c39662b" (UID: "51b7a0ef-2050-425e-a0ea-b49b4c39662b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.505639 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51b7a0ef-2050-425e-a0ea-b49b4c39662b-kube-api-access-jx2sb" (OuterVolumeSpecName: "kube-api-access-jx2sb") pod "51b7a0ef-2050-425e-a0ea-b49b4c39662b" (UID: "51b7a0ef-2050-425e-a0ea-b49b4c39662b"). InnerVolumeSpecName "kube-api-access-jx2sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.604498 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b7a0ef-2050-425e-a0ea-b49b4c39662b-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.604532 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx2sb\" (UniqueName: \"kubernetes.io/projected/51b7a0ef-2050-425e-a0ea-b49b4c39662b-kube-api-access-jx2sb\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.604555 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b7a0ef-2050-425e-a0ea-b49b4c39662b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.717412 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tkq64" 
event={"ID":"67d55644-d050-4673-9c82-cba0c15d4537","Type":"ContainerDied","Data":"5eee3dc616db8987bcc5c04e8d439d4d666a187be99e08b6950bca9b712da938"} Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.717425 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tkq64" Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.718904 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" event={"ID":"51b7a0ef-2050-425e-a0ea-b49b4c39662b","Type":"ContainerDied","Data":"3bc62abc87bbd103f1233186e6fef1bec32233b081b4c253c02f750c9b801731"} Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.718964 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mj7x9" Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.730912 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f3648c65-9fcb-4a9e-b4cb-d8437dc00141","Type":"ContainerStarted","Data":"1a8a42fc79fdf52f814acc2ac5d113099eb491ced0b8ac0af600ea3a850f3feb"} Jan 24 00:20:25 crc kubenswrapper[4676]: E0124 00:20:25.737500 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="2bbaae64-ac2d-43c6-8984-5483f2eb4211" Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.771733 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.829414 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tkq64"] Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.844008 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-675f4bcbfc-tkq64"] Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.869456 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mj7x9"] Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.874789 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mj7x9"] Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.959150 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.967879 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6sl9q"] Jan 24 00:20:25 crc kubenswrapper[4676]: I0124 00:20:25.984303 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kvzq7"] Jan 24 00:20:26 crc kubenswrapper[4676]: I0124 00:20:26.030627 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 24 00:20:26 crc kubenswrapper[4676]: I0124 00:20:26.125329 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jsd4n"] Jan 24 00:20:26 crc kubenswrapper[4676]: I0124 00:20:26.276848 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51b7a0ef-2050-425e-a0ea-b49b4c39662b" path="/var/lib/kubelet/pods/51b7a0ef-2050-425e-a0ea-b49b4c39662b/volumes" Jan 24 00:20:26 crc kubenswrapper[4676]: I0124 00:20:26.281347 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67d55644-d050-4673-9c82-cba0c15d4537" path="/var/lib/kubelet/pods/67d55644-d050-4673-9c82-cba0c15d4537/volumes" Jan 24 00:20:26 crc kubenswrapper[4676]: W0124 00:20:26.594266 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3df4276b_01b4_403f_b040_85d5a8a9ef03.slice/crio-88c849f82d3193bf091f531d74d4fde6ab5977e0c495ee6e86efeb15c6348494 WatchSource:0}: 
Error finding container 88c849f82d3193bf091f531d74d4fde6ab5977e0c495ee6e86efeb15c6348494: Status 404 returned error can't find the container with id 88c849f82d3193bf091f531d74d4fde6ab5977e0c495ee6e86efeb15c6348494 Jan 24 00:20:26 crc kubenswrapper[4676]: W0124 00:20:26.601697 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19365292_50d8_4e94_952f_2df7ee20f0ba.slice/crio-cf388a4e0df4daefa3cbe9e8d89fe266c7758649deb6e360cf7df44e13eba394 WatchSource:0}: Error finding container cf388a4e0df4daefa3cbe9e8d89fe266c7758649deb6e360cf7df44e13eba394: Status 404 returned error can't find the container with id cf388a4e0df4daefa3cbe9e8d89fe266c7758649deb6e360cf7df44e13eba394 Jan 24 00:20:26 crc kubenswrapper[4676]: W0124 00:20:26.603963 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod427fbd2d_16ef_44a6_a71d_8172f56b863d.slice/crio-c93aad98f7fa07e20203ee19cebf86d71af1c830ae464b14a02d327a2041bcf7 WatchSource:0}: Error finding container c93aad98f7fa07e20203ee19cebf86d71af1c830ae464b14a02d327a2041bcf7: Status 404 returned error can't find the container with id c93aad98f7fa07e20203ee19cebf86d71af1c830ae464b14a02d327a2041bcf7 Jan 24 00:20:26 crc kubenswrapper[4676]: I0124 00:20:26.748665 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6sl9q" event={"ID":"427fbd2d-16ef-44a6-a71d-8172f56b863d","Type":"ContainerStarted","Data":"c93aad98f7fa07e20203ee19cebf86d71af1c830ae464b14a02d327a2041bcf7"} Jan 24 00:20:26 crc kubenswrapper[4676]: I0124 00:20:26.752806 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"19365292-50d8-4e94-952f-2df7ee20f0ba","Type":"ContainerStarted","Data":"cf388a4e0df4daefa3cbe9e8d89fe266c7758649deb6e360cf7df44e13eba394"} Jan 24 00:20:26 crc kubenswrapper[4676]: I0124 00:20:26.754531 4676 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c626d726-d406-41c7-8e66-07cc148d3aa7","Type":"ContainerStarted","Data":"a0d93a838adacca6f5b952cbfd2b7bfc9bf0b2ee8a2c8c9cf055df1b2e54dbca"} Jan 24 00:20:26 crc kubenswrapper[4676]: I0124 00:20:26.756120 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jsd4n" event={"ID":"7adbcf83-efbd-4e8d-97e5-f8768463284a","Type":"ContainerStarted","Data":"5e5c0ff40f6a2e48e361d3f8d5e25ed79d68cc33338a1264dfd1615c43c39542"} Jan 24 00:20:26 crc kubenswrapper[4676]: I0124 00:20:26.757252 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvzq7" event={"ID":"3df4276b-01b4-403f-b040-85d5a8a9ef03","Type":"ContainerStarted","Data":"88c849f82d3193bf091f531d74d4fde6ab5977e0c495ee6e86efeb15c6348494"} Jan 24 00:20:26 crc kubenswrapper[4676]: I0124 00:20:26.758520 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3bff778a-b10f-4ba9-a12f-f4086608fd30","Type":"ContainerStarted","Data":"2482cd974ac4c9436d361422f20d218cb4125f3a7bc21fdfea7285523bc780a7"} Jan 24 00:20:27 crc kubenswrapper[4676]: I0124 00:20:27.767843 4676 generic.go:334] "Generic (PLEG): container finished" podID="3df4276b-01b4-403f-b040-85d5a8a9ef03" containerID="82d05dea3899aabc4e3fbab8a0c2cac7fba0b90a8cfd043aada7232a1d18050e" exitCode=0 Jan 24 00:20:27 crc kubenswrapper[4676]: I0124 00:20:27.767942 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvzq7" event={"ID":"3df4276b-01b4-403f-b040-85d5a8a9ef03","Type":"ContainerDied","Data":"82d05dea3899aabc4e3fbab8a0c2cac7fba0b90a8cfd043aada7232a1d18050e"} Jan 24 00:20:27 crc kubenswrapper[4676]: I0124 00:20:27.771403 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"545f6045-cf2f-4d4b-91d8-227148ddd71a","Type":"ContainerStarted","Data":"e85e704d14dd52ead197be8059609934dceae01cbb53278d3e8039062faf3104"} Jan 24 00:20:27 crc kubenswrapper[4676]: I0124 00:20:27.771481 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 24 00:20:27 crc kubenswrapper[4676]: I0124 00:20:27.775431 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"f13744ab61f6ff84c30249dfd3e19836649d7bb6c4e4a3db144939c565fd684d"} Jan 24 00:20:27 crc kubenswrapper[4676]: I0124 00:20:27.844434 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=24.634153511 podStartE2EDuration="29.844416332s" podCreationTimestamp="2026-01-24 00:19:58 +0000 UTC" firstStartedPulling="2026-01-24 00:20:21.45065931 +0000 UTC m=+1005.480630331" lastFinishedPulling="2026-01-24 00:20:26.660922141 +0000 UTC m=+1010.690893152" observedRunningTime="2026-01-24 00:20:27.81791752 +0000 UTC m=+1011.847888521" watchObservedRunningTime="2026-01-24 00:20:27.844416332 +0000 UTC m=+1011.874387333" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.848755 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-mq6ns"] Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.850109 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.857591 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.873689 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mq6ns"] Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.893633 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/21550436-cd71-46d8-838e-c51c19ddf8ff-ovs-rundir\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.893704 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/21550436-cd71-46d8-838e-c51c19ddf8ff-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.893735 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blhn8\" (UniqueName: \"kubernetes.io/projected/21550436-cd71-46d8-838e-c51c19ddf8ff-kube-api-access-blhn8\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.893801 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21550436-cd71-46d8-838e-c51c19ddf8ff-config\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") 
" pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.893825 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21550436-cd71-46d8-838e-c51c19ddf8ff-combined-ca-bundle\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.893850 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/21550436-cd71-46d8-838e-c51c19ddf8ff-ovn-rundir\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.988724 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zhmps"] Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.994849 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21550436-cd71-46d8-838e-c51c19ddf8ff-config\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.994894 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21550436-cd71-46d8-838e-c51c19ddf8ff-combined-ca-bundle\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.994926 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/21550436-cd71-46d8-838e-c51c19ddf8ff-ovn-rundir\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.994970 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/21550436-cd71-46d8-838e-c51c19ddf8ff-ovs-rundir\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.994990 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/21550436-cd71-46d8-838e-c51c19ddf8ff-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.995013 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blhn8\" (UniqueName: \"kubernetes.io/projected/21550436-cd71-46d8-838e-c51c19ddf8ff-kube-api-access-blhn8\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.995874 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21550436-cd71-46d8-838e-c51c19ddf8ff-config\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.999512 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/21550436-cd71-46d8-838e-c51c19ddf8ff-ovs-rundir\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:29 crc kubenswrapper[4676]: I0124 00:20:29.999555 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/21550436-cd71-46d8-838e-c51c19ddf8ff-ovn-rundir\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.015893 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/21550436-cd71-46d8-838e-c51c19ddf8ff-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.017961 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21550436-cd71-46d8-838e-c51c19ddf8ff-combined-ca-bundle\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.032261 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blhn8\" (UniqueName: \"kubernetes.io/projected/21550436-cd71-46d8-838e-c51c19ddf8ff-kube-api-access-blhn8\") pod \"ovn-controller-metrics-mq6ns\" (UID: \"21550436-cd71-46d8-838e-c51c19ddf8ff\") " pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.054425 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-qhgmq"] Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.056524 4676 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.058984 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.066908 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-qhgmq"] Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.096141 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-qhgmq\" (UID: \"88763e6b-93f1-4b9d-bb0c-5c487659691a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.096389 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx7ts\" (UniqueName: \"kubernetes.io/projected/88763e6b-93f1-4b9d-bb0c-5c487659691a-kube-api-access-gx7ts\") pod \"dnsmasq-dns-5bf47b49b7-qhgmq\" (UID: \"88763e6b-93f1-4b9d-bb0c-5c487659691a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.096450 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-config\") pod \"dnsmasq-dns-5bf47b49b7-qhgmq\" (UID: \"88763e6b-93f1-4b9d-bb0c-5c487659691a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.096507 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-qhgmq\" (UID: \"88763e6b-93f1-4b9d-bb0c-5c487659691a\") 
" pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.188213 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mq6ns" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.199152 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-qhgmq\" (UID: \"88763e6b-93f1-4b9d-bb0c-5c487659691a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.199197 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx7ts\" (UniqueName: \"kubernetes.io/projected/88763e6b-93f1-4b9d-bb0c-5c487659691a-kube-api-access-gx7ts\") pod \"dnsmasq-dns-5bf47b49b7-qhgmq\" (UID: \"88763e6b-93f1-4b9d-bb0c-5c487659691a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.199261 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-config\") pod \"dnsmasq-dns-5bf47b49b7-qhgmq\" (UID: \"88763e6b-93f1-4b9d-bb0c-5c487659691a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.199325 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-qhgmq\" (UID: \"88763e6b-93f1-4b9d-bb0c-5c487659691a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.200443 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-config\") pod \"dnsmasq-dns-5bf47b49b7-qhgmq\" (UID: \"88763e6b-93f1-4b9d-bb0c-5c487659691a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.200473 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-qhgmq\" (UID: \"88763e6b-93f1-4b9d-bb0c-5c487659691a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.200502 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-qhgmq\" (UID: \"88763e6b-93f1-4b9d-bb0c-5c487659691a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.223404 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx7ts\" (UniqueName: \"kubernetes.io/projected/88763e6b-93f1-4b9d-bb0c-5c487659691a-kube-api-access-gx7ts\") pod \"dnsmasq-dns-5bf47b49b7-qhgmq\" (UID: \"88763e6b-93f1-4b9d-bb0c-5c487659691a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.327808 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9rgdb"] Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.366609 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbwwk"] Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.367806 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.386295 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.396322 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbwwk"] Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.402176 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-dns-svc\") pod \"dnsmasq-dns-8554648995-sbwwk\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") " pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.402238 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-sbwwk\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") " pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.402267 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-sbwwk\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") " pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.402360 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-config\") pod \"dnsmasq-dns-8554648995-sbwwk\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") " pod="openstack/dnsmasq-dns-8554648995-sbwwk" 
Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.402409 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btl5m\" (UniqueName: \"kubernetes.io/projected/1f21d906-2974-4e56-b11f-d888e452c565-kube-api-access-btl5m\") pod \"dnsmasq-dns-8554648995-sbwwk\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") " pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.425388 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.505117 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-sbwwk\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") " pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.505619 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-config\") pod \"dnsmasq-dns-8554648995-sbwwk\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") " pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.506573 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-config\") pod \"dnsmasq-dns-8554648995-sbwwk\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") " pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.506624 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btl5m\" (UniqueName: 
\"kubernetes.io/projected/1f21d906-2974-4e56-b11f-d888e452c565-kube-api-access-btl5m\") pod \"dnsmasq-dns-8554648995-sbwwk\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") " pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.506717 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-dns-svc\") pod \"dnsmasq-dns-8554648995-sbwwk\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") " pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.506750 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-sbwwk\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") " pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.509952 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-sbwwk\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") " pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.509988 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-dns-svc\") pod \"dnsmasq-dns-8554648995-sbwwk\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") " pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.511478 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-ovsdbserver-sb\") pod 
\"dnsmasq-dns-8554648995-sbwwk\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") " pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.546335 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btl5m\" (UniqueName: \"kubernetes.io/projected/1f21d906-2974-4e56-b11f-d888e452c565-kube-api-access-btl5m\") pod \"dnsmasq-dns-8554648995-sbwwk\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") " pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:30 crc kubenswrapper[4676]: I0124 00:20:30.712654 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:31 crc kubenswrapper[4676]: I0124 00:20:31.953083 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" Jan 24 00:20:31 crc kubenswrapper[4676]: I0124 00:20:31.960667 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.034723 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhwbc\" (UniqueName: \"kubernetes.io/projected/b577ffd6-6d01-45d7-98e5-8c21ac8de280-kube-api-access-hhwbc\") pod \"b577ffd6-6d01-45d7-98e5-8c21ac8de280\" (UID: \"b577ffd6-6d01-45d7-98e5-8c21ac8de280\") " Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.034806 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b577ffd6-6d01-45d7-98e5-8c21ac8de280-dns-svc\") pod \"b577ffd6-6d01-45d7-98e5-8c21ac8de280\" (UID: \"b577ffd6-6d01-45d7-98e5-8c21ac8de280\") " Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.034913 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b500aa14-7f1d-4209-b0b7-619a37f58bcd-config\") pod \"b500aa14-7f1d-4209-b0b7-619a37f58bcd\" (UID: \"b500aa14-7f1d-4209-b0b7-619a37f58bcd\") " Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.034978 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b500aa14-7f1d-4209-b0b7-619a37f58bcd-dns-svc\") pod \"b500aa14-7f1d-4209-b0b7-619a37f58bcd\" (UID: \"b500aa14-7f1d-4209-b0b7-619a37f58bcd\") " Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.035097 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zmnp\" (UniqueName: \"kubernetes.io/projected/b500aa14-7f1d-4209-b0b7-619a37f58bcd-kube-api-access-4zmnp\") pod \"b500aa14-7f1d-4209-b0b7-619a37f58bcd\" (UID: \"b500aa14-7f1d-4209-b0b7-619a37f58bcd\") " Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.035146 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b577ffd6-6d01-45d7-98e5-8c21ac8de280-config\") pod \"b577ffd6-6d01-45d7-98e5-8c21ac8de280\" (UID: \"b577ffd6-6d01-45d7-98e5-8c21ac8de280\") " Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.036529 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b577ffd6-6d01-45d7-98e5-8c21ac8de280-config" (OuterVolumeSpecName: "config") pod "b577ffd6-6d01-45d7-98e5-8c21ac8de280" (UID: "b577ffd6-6d01-45d7-98e5-8c21ac8de280"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.037460 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b577ffd6-6d01-45d7-98e5-8c21ac8de280-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b577ffd6-6d01-45d7-98e5-8c21ac8de280" (UID: "b577ffd6-6d01-45d7-98e5-8c21ac8de280"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.037547 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b500aa14-7f1d-4209-b0b7-619a37f58bcd-config" (OuterVolumeSpecName: "config") pod "b500aa14-7f1d-4209-b0b7-619a37f58bcd" (UID: "b500aa14-7f1d-4209-b0b7-619a37f58bcd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.037913 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b500aa14-7f1d-4209-b0b7-619a37f58bcd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b500aa14-7f1d-4209-b0b7-619a37f58bcd" (UID: "b500aa14-7f1d-4209-b0b7-619a37f58bcd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.051528 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b577ffd6-6d01-45d7-98e5-8c21ac8de280-kube-api-access-hhwbc" (OuterVolumeSpecName: "kube-api-access-hhwbc") pod "b577ffd6-6d01-45d7-98e5-8c21ac8de280" (UID: "b577ffd6-6d01-45d7-98e5-8c21ac8de280"). InnerVolumeSpecName "kube-api-access-hhwbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.051608 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b500aa14-7f1d-4209-b0b7-619a37f58bcd-kube-api-access-4zmnp" (OuterVolumeSpecName: "kube-api-access-4zmnp") pod "b500aa14-7f1d-4209-b0b7-619a37f58bcd" (UID: "b500aa14-7f1d-4209-b0b7-619a37f58bcd"). InnerVolumeSpecName "kube-api-access-4zmnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.139060 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhwbc\" (UniqueName: \"kubernetes.io/projected/b577ffd6-6d01-45d7-98e5-8c21ac8de280-kube-api-access-hhwbc\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.139103 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b577ffd6-6d01-45d7-98e5-8c21ac8de280-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.139115 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b500aa14-7f1d-4209-b0b7-619a37f58bcd-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.139127 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b500aa14-7f1d-4209-b0b7-619a37f58bcd-dns-svc\") on node 
\"crc\" DevicePath \"\"" Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.139141 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zmnp\" (UniqueName: \"kubernetes.io/projected/b500aa14-7f1d-4209-b0b7-619a37f58bcd-kube-api-access-4zmnp\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.139152 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b577ffd6-6d01-45d7-98e5-8c21ac8de280-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.819666 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" event={"ID":"b500aa14-7f1d-4209-b0b7-619a37f58bcd","Type":"ContainerDied","Data":"98e1e2ee667220310f02f8a03eb3c9bedd6e3875fe7c7233b27e507391445c5f"} Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.819718 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zhmps" Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.821866 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.823924 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9rgdb" event={"ID":"b577ffd6-6d01-45d7-98e5-8c21ac8de280","Type":"ContainerDied","Data":"f87d0959391165a8c4c5cf866d530b4896a47324d10a6af28b679062fde837bb"} Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.869055 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9rgdb"] Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.895467 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9rgdb"] Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.907151 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zhmps"] Jan 24 00:20:32 crc kubenswrapper[4676]: I0124 00:20:32.914414 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zhmps"] Jan 24 00:20:33 crc kubenswrapper[4676]: I0124 00:20:33.832588 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mq6ns"] Jan 24 00:20:33 crc kubenswrapper[4676]: I0124 00:20:33.837574 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 24 00:20:33 crc kubenswrapper[4676]: I0124 00:20:33.973301 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-qhgmq"] Jan 24 00:20:34 crc kubenswrapper[4676]: I0124 00:20:34.032816 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbwwk"] Jan 24 00:20:34 crc kubenswrapper[4676]: W0124 00:20:34.049412 4676 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88763e6b_93f1_4b9d_bb0c_5c487659691a.slice/crio-e7b4b194ef042a28036832a666cfba7e94711668d0bfc32ab0d45edb956b6341 WatchSource:0}: Error finding container e7b4b194ef042a28036832a666cfba7e94711668d0bfc32ab0d45edb956b6341: Status 404 returned error can't find the container with id e7b4b194ef042a28036832a666cfba7e94711668d0bfc32ab0d45edb956b6341 Jan 24 00:20:34 crc kubenswrapper[4676]: W0124 00:20:34.050240 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f21d906_2974_4e56_b11f_d888e452c565.slice/crio-089f8e4a35ef84364abbbae1879307f5831ba1a07f81a4231c6cfc6207ff2e6a WatchSource:0}: Error finding container 089f8e4a35ef84364abbbae1879307f5831ba1a07f81a4231c6cfc6207ff2e6a: Status 404 returned error can't find the container with id 089f8e4a35ef84364abbbae1879307f5831ba1a07f81a4231c6cfc6207ff2e6a Jan 24 00:20:34 crc kubenswrapper[4676]: I0124 00:20:34.266917 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b500aa14-7f1d-4209-b0b7-619a37f58bcd" path="/var/lib/kubelet/pods/b500aa14-7f1d-4209-b0b7-619a37f58bcd/volumes" Jan 24 00:20:34 crc kubenswrapper[4676]: I0124 00:20:34.267347 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b577ffd6-6d01-45d7-98e5-8c21ac8de280" path="/var/lib/kubelet/pods/b577ffd6-6d01-45d7-98e5-8c21ac8de280/volumes" Jan 24 00:20:34 crc kubenswrapper[4676]: I0124 00:20:34.848770 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbwwk" event={"ID":"1f21d906-2974-4e56-b11f-d888e452c565","Type":"ContainerStarted","Data":"089f8e4a35ef84364abbbae1879307f5831ba1a07f81a4231c6cfc6207ff2e6a"} Jan 24 00:20:34 crc kubenswrapper[4676]: I0124 00:20:34.852503 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mq6ns" 
event={"ID":"21550436-cd71-46d8-838e-c51c19ddf8ff","Type":"ContainerStarted","Data":"bb815329b234bedc0ff99ce46c1b957b19b8f16ce6d1ed4c91a62db36d7a60a6"} Jan 24 00:20:34 crc kubenswrapper[4676]: I0124 00:20:34.854735 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"19365292-50d8-4e94-952f-2df7ee20f0ba","Type":"ContainerStarted","Data":"cc6ddc1cfea59e40d8c120b72119e121b7779ff22cf30be7174f4f8d06cdda0d"} Jan 24 00:20:34 crc kubenswrapper[4676]: I0124 00:20:34.866753 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" event={"ID":"88763e6b-93f1-4b9d-bb0c-5c487659691a","Type":"ContainerStarted","Data":"e7b4b194ef042a28036832a666cfba7e94711668d0bfc32ab0d45edb956b6341"} Jan 24 00:20:34 crc kubenswrapper[4676]: I0124 00:20:34.885732 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvzq7" event={"ID":"3df4276b-01b4-403f-b040-85d5a8a9ef03","Type":"ContainerStarted","Data":"4f4e99bafbb1a85396c9179c36382959eaa8fbb41e0c1ee24af103d0f0f87055"} Jan 24 00:20:35 crc kubenswrapper[4676]: I0124 00:20:35.895489 4676 generic.go:334] "Generic (PLEG): container finished" podID="3df4276b-01b4-403f-b040-85d5a8a9ef03" containerID="4f4e99bafbb1a85396c9179c36382959eaa8fbb41e0c1ee24af103d0f0f87055" exitCode=0 Jan 24 00:20:35 crc kubenswrapper[4676]: I0124 00:20:35.895539 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvzq7" event={"ID":"3df4276b-01b4-403f-b040-85d5a8a9ef03","Type":"ContainerDied","Data":"4f4e99bafbb1a85396c9179c36382959eaa8fbb41e0c1ee24af103d0f0f87055"} Jan 24 00:20:35 crc kubenswrapper[4676]: I0124 00:20:35.900060 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3bff778a-b10f-4ba9-a12f-f4086608fd30","Type":"ContainerStarted","Data":"1d968960939a9b7e1bd096a290e2ee13a9dbd2ba175fc9b933b94751653a84c4"} Jan 24 00:20:35 
crc kubenswrapper[4676]: I0124 00:20:35.906343 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f3648c65-9fcb-4a9e-b4cb-d8437dc00141","Type":"ContainerStarted","Data":"8e9154fa93ad06490aa55f2202981ee1da8c0a43309a0b2dce39315135172a68"} Jan 24 00:20:35 crc kubenswrapper[4676]: I0124 00:20:35.910319 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" event={"ID":"88763e6b-93f1-4b9d-bb0c-5c487659691a","Type":"ContainerStarted","Data":"3928e7c189d053145e8953aee775f48c10df19c47e391d85edfeeb0d85f27e07"} Jan 24 00:20:36 crc kubenswrapper[4676]: I0124 00:20:36.933947 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2bbaae64-ac2d-43c6-8984-5483f2eb4211","Type":"ContainerStarted","Data":"c0a4e7b27fb0d5d3a378d4a19e1e7909edf293a37101b8ea6af0ed481d778150"} Jan 24 00:20:36 crc kubenswrapper[4676]: I0124 00:20:36.937470 4676 generic.go:334] "Generic (PLEG): container finished" podID="1f21d906-2974-4e56-b11f-d888e452c565" containerID="d33ce991197b97fff8c8aedf2ecfa7d98c5e7a4c61013dd5a3b38abbab942c0d" exitCode=0 Jan 24 00:20:36 crc kubenswrapper[4676]: I0124 00:20:36.937572 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbwwk" event={"ID":"1f21d906-2974-4e56-b11f-d888e452c565","Type":"ContainerDied","Data":"d33ce991197b97fff8c8aedf2ecfa7d98c5e7a4c61013dd5a3b38abbab942c0d"} Jan 24 00:20:36 crc kubenswrapper[4676]: I0124 00:20:36.942615 4676 generic.go:334] "Generic (PLEG): container finished" podID="427fbd2d-16ef-44a6-a71d-8172f56b863d" containerID="3fb2cc055bd476a6bee8209aff0183f3759ee7543512760cfe3f629dbaf87a3e" exitCode=0 Jan 24 00:20:36 crc kubenswrapper[4676]: I0124 00:20:36.943309 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6sl9q" 
event={"ID":"427fbd2d-16ef-44a6-a71d-8172f56b863d","Type":"ContainerDied","Data":"3fb2cc055bd476a6bee8209aff0183f3759ee7543512760cfe3f629dbaf87a3e"} Jan 24 00:20:36 crc kubenswrapper[4676]: I0124 00:20:36.944795 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"68d6466c-a6ff-40ba-952d-007b14efdfd3","Type":"ContainerStarted","Data":"0db0998b0243f3673ede9e2b18bccbbf8c17216722eeabea36670a8396448723"} Jan 24 00:20:36 crc kubenswrapper[4676]: I0124 00:20:36.950053 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"36558de2-6aac-43e9-832d-2f96c46e8152","Type":"ContainerStarted","Data":"9d758f8c9e92f01e23b0692665a48971ec4eb0595df60b78749bc71450ba8960"} Jan 24 00:20:36 crc kubenswrapper[4676]: I0124 00:20:36.956226 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c626d726-d406-41c7-8e66-07cc148d3aa7","Type":"ContainerStarted","Data":"0c6072b7b64b06470b62859afd989687c08854e4403eafaa80685de7b9b6ba62"} Jan 24 00:20:36 crc kubenswrapper[4676]: I0124 00:20:36.956488 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 24 00:20:36 crc kubenswrapper[4676]: I0124 00:20:36.960734 4676 generic.go:334] "Generic (PLEG): container finished" podID="88763e6b-93f1-4b9d-bb0c-5c487659691a" containerID="3928e7c189d053145e8953aee775f48c10df19c47e391d85edfeeb0d85f27e07" exitCode=0 Jan 24 00:20:36 crc kubenswrapper[4676]: I0124 00:20:36.960793 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" event={"ID":"88763e6b-93f1-4b9d-bb0c-5c487659691a","Type":"ContainerDied","Data":"3928e7c189d053145e8953aee775f48c10df19c47e391d85edfeeb0d85f27e07"} Jan 24 00:20:36 crc kubenswrapper[4676]: I0124 00:20:36.966917 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jsd4n" 
event={"ID":"7adbcf83-efbd-4e8d-97e5-f8768463284a","Type":"ContainerStarted","Data":"aa2386ed4dd4a9df8b7b6ed99a2ba0c46a71bd82d2dedf15a8a7c0f47a77fc04"} Jan 24 00:20:36 crc kubenswrapper[4676]: I0124 00:20:36.967237 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-jsd4n" Jan 24 00:20:37 crc kubenswrapper[4676]: I0124 00:20:37.018739 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=28.916833852 podStartE2EDuration="37.018718931s" podCreationTimestamp="2026-01-24 00:20:00 +0000 UTC" firstStartedPulling="2026-01-24 00:20:26.584554722 +0000 UTC m=+1010.614525713" lastFinishedPulling="2026-01-24 00:20:34.686439791 +0000 UTC m=+1018.716410792" observedRunningTime="2026-01-24 00:20:37.007615321 +0000 UTC m=+1021.037586322" watchObservedRunningTime="2026-01-24 00:20:37.018718931 +0000 UTC m=+1021.048689932" Jan 24 00:20:38 crc kubenswrapper[4676]: I0124 00:20:38.983775 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f3648c65-9fcb-4a9e-b4cb-d8437dc00141","Type":"ContainerStarted","Data":"9b2d3b0af63cc6c86788a0a84d14fea1f9e0a49f50cde256e1e22273997d0ab3"} Jan 24 00:20:38 crc kubenswrapper[4676]: I0124 00:20:38.987275 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mq6ns" event={"ID":"21550436-cd71-46d8-838e-c51c19ddf8ff","Type":"ContainerStarted","Data":"e436b0e9509a2ae3292b89ad33ee395a20f24bc93aff141bdbc6165bd9ff3e39"} Jan 24 00:20:38 crc kubenswrapper[4676]: I0124 00:20:38.990396 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbwwk" event={"ID":"1f21d906-2974-4e56-b11f-d888e452c565","Type":"ContainerStarted","Data":"577f50aa5b4437d91afcb97c82c41e9f5ee408e1ba9e1be7852f62684447d5be"} Jan 24 00:20:38 crc kubenswrapper[4676]: I0124 00:20:38.990845 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:38 crc kubenswrapper[4676]: I0124 00:20:38.993164 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6sl9q" event={"ID":"427fbd2d-16ef-44a6-a71d-8172f56b863d","Type":"ContainerStarted","Data":"91def523d5613035da540e2d411ee58710a4e89b8fd9300a4c87e35e1418e471"} Jan 24 00:20:38 crc kubenswrapper[4676]: I0124 00:20:38.993283 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6sl9q" event={"ID":"427fbd2d-16ef-44a6-a71d-8172f56b863d","Type":"ContainerStarted","Data":"2c6c08b72e23b0a66e2f98c4164c911ad5d2b982503fa257694899414db35c29"} Jan 24 00:20:38 crc kubenswrapper[4676]: I0124 00:20:38.993356 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:38 crc kubenswrapper[4676]: I0124 00:20:38.993490 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:20:39 crc kubenswrapper[4676]: I0124 00:20:39.000670 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" event={"ID":"88763e6b-93f1-4b9d-bb0c-5c487659691a","Type":"ContainerStarted","Data":"d0d6ce007660b208b078333ef0f16c59301681c150c8974f5c9cb45ac7fa6ae6"} Jan 24 00:20:39 crc kubenswrapper[4676]: I0124 00:20:39.000971 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:39 crc kubenswrapper[4676]: I0124 00:20:39.004236 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvzq7" event={"ID":"3df4276b-01b4-403f-b040-85d5a8a9ef03","Type":"ContainerStarted","Data":"23f51327ad99c429533212b03ced853abc5e0f0d71914be27d57c58e73611634"} Jan 24 00:20:39 crc kubenswrapper[4676]: I0124 00:20:39.011306 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"3bff778a-b10f-4ba9-a12f-f4086608fd30","Type":"ContainerStarted","Data":"99b430e598fa74475861fe6f1ff84e27aa9e920cfbaf9c8ce0d53faadf5beb79"} Jan 24 00:20:39 crc kubenswrapper[4676]: I0124 00:20:39.012162 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-jsd4n" podStartSLOduration=27.932326034 podStartE2EDuration="35.012145975s" podCreationTimestamp="2026-01-24 00:20:04 +0000 UTC" firstStartedPulling="2026-01-24 00:20:26.578651311 +0000 UTC m=+1010.608622312" lastFinishedPulling="2026-01-24 00:20:33.658471252 +0000 UTC m=+1017.688442253" observedRunningTime="2026-01-24 00:20:37.080494503 +0000 UTC m=+1021.110465524" watchObservedRunningTime="2026-01-24 00:20:39.012145975 +0000 UTC m=+1023.042116976" Jan 24 00:20:39 crc kubenswrapper[4676]: I0124 00:20:39.013495 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.84420677 podStartE2EDuration="33.013488545s" podCreationTimestamp="2026-01-24 00:20:06 +0000 UTC" firstStartedPulling="2026-01-24 00:20:25.219237422 +0000 UTC m=+1009.249208423" lastFinishedPulling="2026-01-24 00:20:38.388519197 +0000 UTC m=+1022.418490198" observedRunningTime="2026-01-24 00:20:39.007282885 +0000 UTC m=+1023.037253886" watchObservedRunningTime="2026-01-24 00:20:39.013488545 +0000 UTC m=+1023.043459546" Jan 24 00:20:39 crc kubenswrapper[4676]: I0124 00:20:39.042155 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-6sl9q" podStartSLOduration=27.155360452 podStartE2EDuration="34.042134272s" podCreationTimestamp="2026-01-24 00:20:05 +0000 UTC" firstStartedPulling="2026-01-24 00:20:26.637547144 +0000 UTC m=+1010.667518155" lastFinishedPulling="2026-01-24 00:20:33.524320954 +0000 UTC m=+1017.554291975" observedRunningTime="2026-01-24 00:20:39.034796388 +0000 UTC m=+1023.064767389" watchObservedRunningTime="2026-01-24 00:20:39.042134272 +0000 UTC 
m=+1023.072105273" Jan 24 00:20:39 crc kubenswrapper[4676]: I0124 00:20:39.062274 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-mq6ns" podStartSLOduration=5.653086369 podStartE2EDuration="10.062259089s" podCreationTimestamp="2026-01-24 00:20:29 +0000 UTC" firstStartedPulling="2026-01-24 00:20:33.957294572 +0000 UTC m=+1017.987265573" lastFinishedPulling="2026-01-24 00:20:38.366467292 +0000 UTC m=+1022.396438293" observedRunningTime="2026-01-24 00:20:39.059585217 +0000 UTC m=+1023.089556218" watchObservedRunningTime="2026-01-24 00:20:39.062259089 +0000 UTC m=+1023.092230090" Jan 24 00:20:39 crc kubenswrapper[4676]: I0124 00:20:39.126212 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" podStartSLOduration=9.449843346 podStartE2EDuration="10.126193177s" podCreationTimestamp="2026-01-24 00:20:29 +0000 UTC" firstStartedPulling="2026-01-24 00:20:34.051524578 +0000 UTC m=+1018.081495579" lastFinishedPulling="2026-01-24 00:20:34.727874409 +0000 UTC m=+1018.757845410" observedRunningTime="2026-01-24 00:20:39.120620616 +0000 UTC m=+1023.150591617" watchObservedRunningTime="2026-01-24 00:20:39.126193177 +0000 UTC m=+1023.156164178" Jan 24 00:20:39 crc kubenswrapper[4676]: I0124 00:20:39.139370 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kvzq7" podStartSLOduration=26.654079484 podStartE2EDuration="37.139353009s" podCreationTimestamp="2026-01-24 00:20:02 +0000 UTC" firstStartedPulling="2026-01-24 00:20:27.770183658 +0000 UTC m=+1011.800154659" lastFinishedPulling="2026-01-24 00:20:38.255457183 +0000 UTC m=+1022.285428184" observedRunningTime="2026-01-24 00:20:39.136072639 +0000 UTC m=+1023.166043640" watchObservedRunningTime="2026-01-24 00:20:39.139353009 +0000 UTC m=+1023.169324010" Jan 24 00:20:39 crc kubenswrapper[4676]: I0124 00:20:39.169861 4676 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-sbwwk" podStartSLOduration=8.533119465 podStartE2EDuration="9.169842143s" podCreationTimestamp="2026-01-24 00:20:30 +0000 UTC" firstStartedPulling="2026-01-24 00:20:34.052059794 +0000 UTC m=+1018.082030795" lastFinishedPulling="2026-01-24 00:20:34.688782472 +0000 UTC m=+1018.718753473" observedRunningTime="2026-01-24 00:20:39.164272352 +0000 UTC m=+1023.194243353" watchObservedRunningTime="2026-01-24 00:20:39.169842143 +0000 UTC m=+1023.199813144" Jan 24 00:20:39 crc kubenswrapper[4676]: I0124 00:20:39.187409 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=23.390911284 podStartE2EDuration="35.18739233s" podCreationTimestamp="2026-01-24 00:20:04 +0000 UTC" firstStartedPulling="2026-01-24 00:20:26.584788149 +0000 UTC m=+1010.614759160" lastFinishedPulling="2026-01-24 00:20:38.381269195 +0000 UTC m=+1022.411240206" observedRunningTime="2026-01-24 00:20:39.182971996 +0000 UTC m=+1023.212942997" watchObservedRunningTime="2026-01-24 00:20:39.18739233 +0000 UTC m=+1023.217363331" Jan 24 00:20:39 crc kubenswrapper[4676]: I0124 00:20:39.223883 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:39 crc kubenswrapper[4676]: I0124 00:20:39.266927 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:40 crc kubenswrapper[4676]: I0124 00:20:40.021410 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:40 crc kubenswrapper[4676]: I0124 00:20:40.068886 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 24 00:20:40 crc kubenswrapper[4676]: I0124 00:20:40.792033 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5bf47b49b7-qhgmq"] Jan 24 00:20:40 crc kubenswrapper[4676]: I0124 00:20:40.842436 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-88xjq"] Jan 24 00:20:40 crc kubenswrapper[4676]: I0124 00:20:40.843834 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:40 crc kubenswrapper[4676]: I0124 00:20:40.860174 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-88xjq"] Jan 24 00:20:40 crc kubenswrapper[4676]: I0124 00:20:40.927429 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpslv\" (UniqueName: \"kubernetes.io/projected/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-kube-api-access-lpslv\") pod \"dnsmasq-dns-b8fbc5445-88xjq\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:40 crc kubenswrapper[4676]: I0124 00:20:40.927914 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-88xjq\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:40 crc kubenswrapper[4676]: I0124 00:20:40.928010 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-config\") pod \"dnsmasq-dns-b8fbc5445-88xjq\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:40 crc kubenswrapper[4676]: I0124 00:20:40.928155 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-88xjq\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:40 crc kubenswrapper[4676]: I0124 00:20:40.928247 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-88xjq\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:41 crc kubenswrapper[4676]: I0124 00:20:41.035061 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-88xjq\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:41 crc kubenswrapper[4676]: I0124 00:20:41.035113 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpslv\" (UniqueName: \"kubernetes.io/projected/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-kube-api-access-lpslv\") pod \"dnsmasq-dns-b8fbc5445-88xjq\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:41 crc kubenswrapper[4676]: I0124 00:20:41.035140 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-88xjq\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:41 crc kubenswrapper[4676]: I0124 00:20:41.035171 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-config\") pod \"dnsmasq-dns-b8fbc5445-88xjq\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:41 crc kubenswrapper[4676]: I0124 00:20:41.035247 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-88xjq\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:41 crc kubenswrapper[4676]: I0124 00:20:41.036057 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-88xjq\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:41 crc kubenswrapper[4676]: I0124 00:20:41.036529 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-88xjq\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:41 crc kubenswrapper[4676]: I0124 00:20:41.036598 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-88xjq\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:41 crc kubenswrapper[4676]: I0124 00:20:41.037061 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-config\") pod \"dnsmasq-dns-b8fbc5445-88xjq\" (UID: 
\"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:41 crc kubenswrapper[4676]: I0124 00:20:41.038900 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" podUID="88763e6b-93f1-4b9d-bb0c-5c487659691a" containerName="dnsmasq-dns" containerID="cri-o://d0d6ce007660b208b078333ef0f16c59301681c150c8974f5c9cb45ac7fa6ae6" gracePeriod=10 Jan 24 00:20:41 crc kubenswrapper[4676]: I0124 00:20:41.052742 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:41 crc kubenswrapper[4676]: I0124 00:20:41.060919 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpslv\" (UniqueName: \"kubernetes.io/projected/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-kube-api-access-lpslv\") pod \"dnsmasq-dns-b8fbc5445-88xjq\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:41 crc kubenswrapper[4676]: I0124 00:20:41.111761 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:41 crc kubenswrapper[4676]: I0124 00:20:41.173314 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:20:41 crc kubenswrapper[4676]: I0124 00:20:41.662814 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-88xjq"] Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.020651 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.032005 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.034150 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.034837 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-6ts8l" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.034879 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.035638 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.052759 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" event={"ID":"1905ca79-a4c4-4286-8d88-2855e7b9ba4c","Type":"ContainerStarted","Data":"b390252d36e77eb3a82a5578bfec5a23efd8e93f3aaecb313b5f2fd7e776552d"} Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.055485 4676 generic.go:334] "Generic (PLEG): container finished" podID="88763e6b-93f1-4b9d-bb0c-5c487659691a" containerID="d0d6ce007660b208b078333ef0f16c59301681c150c8974f5c9cb45ac7fa6ae6" exitCode=0 Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.055767 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" event={"ID":"88763e6b-93f1-4b9d-bb0c-5c487659691a","Type":"ContainerDied","Data":"d0d6ce007660b208b078333ef0f16c59301681c150c8974f5c9cb45ac7fa6ae6"} Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.056322 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.059207 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.102672 4676 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.153015 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4620e725-9218-461b-a56d-104bcb7f1df4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.153054 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmrgd\" (UniqueName: \"kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-kube-api-access-vmrgd\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.153657 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.153944 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4620e725-9218-461b-a56d-104bcb7f1df4-lock\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.154273 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4620e725-9218-461b-a56d-104bcb7f1df4-cache\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 
crc kubenswrapper[4676]: I0124 00:20:42.154314 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.257488 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.257589 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4620e725-9218-461b-a56d-104bcb7f1df4-lock\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.257625 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4620e725-9218-461b-a56d-104bcb7f1df4-cache\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: E0124 00:20:42.257633 4676 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.257648 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.257674 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4620e725-9218-461b-a56d-104bcb7f1df4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.257695 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmrgd\" (UniqueName: \"kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-kube-api-access-vmrgd\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: E0124 00:20:42.257654 4676 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 00:20:42 crc kubenswrapper[4676]: E0124 00:20:42.257800 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift podName:4620e725-9218-461b-a56d-104bcb7f1df4 nodeName:}" failed. No retries permitted until 2026-01-24 00:20:42.757773062 +0000 UTC m=+1026.787744063 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift") pod "swift-storage-0" (UID: "4620e725-9218-461b-a56d-104bcb7f1df4") : configmap "swift-ring-files" not found Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.258078 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4620e725-9218-461b-a56d-104bcb7f1df4-lock\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.258086 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4620e725-9218-461b-a56d-104bcb7f1df4-cache\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.258273 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.268724 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4620e725-9218-461b-a56d-104bcb7f1df4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.277251 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmrgd\" (UniqueName: \"kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-kube-api-access-vmrgd\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " 
pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.282834 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.567486 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kvzq7" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.567541 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kvzq7" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.703609 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.705102 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.709223 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 24 00:20:42 crc kubenswrapper[4676]: W0124 00:20:42.709421 4676 reflector.go:561] object-"openstack"/"cert-ovnnorthd-ovndbs": failed to list *v1.Secret: secrets "cert-ovnnorthd-ovndbs" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 24 00:20:42 crc kubenswrapper[4676]: E0124 00:20:42.709445 4676 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovnnorthd-ovndbs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-ovnnorthd-ovndbs\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this 
object" logger="UnhandledError" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.709985 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-g286t" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.719128 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.732530 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.766354 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1486ea92-d267-49cd-8516-d474ef25c2df-config\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.766501 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1486ea92-d267-49cd-8516-d474ef25c2df-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.766552 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.766593 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1486ea92-d267-49cd-8516-d474ef25c2df-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 
00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.766631 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2qll\" (UniqueName: \"kubernetes.io/projected/1486ea92-d267-49cd-8516-d474ef25c2df-kube-api-access-q2qll\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.766676 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1486ea92-d267-49cd-8516-d474ef25c2df-scripts\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.766701 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1486ea92-d267-49cd-8516-d474ef25c2df-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.766730 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1486ea92-d267-49cd-8516-d474ef25c2df-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: E0124 00:20:42.766973 4676 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 00:20:42 crc kubenswrapper[4676]: E0124 00:20:42.767506 4676 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 00:20:42 crc kubenswrapper[4676]: E0124 00:20:42.767762 4676 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift podName:4620e725-9218-461b-a56d-104bcb7f1df4 nodeName:}" failed. No retries permitted until 2026-01-24 00:20:43.767743509 +0000 UTC m=+1027.797714510 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift") pod "swift-storage-0" (UID: "4620e725-9218-461b-a56d-104bcb7f1df4") : configmap "swift-ring-files" not found Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.867838 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2qll\" (UniqueName: \"kubernetes.io/projected/1486ea92-d267-49cd-8516-d474ef25c2df-kube-api-access-q2qll\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.867906 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1486ea92-d267-49cd-8516-d474ef25c2df-scripts\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.867931 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1486ea92-d267-49cd-8516-d474ef25c2df-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.867957 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1486ea92-d267-49cd-8516-d474ef25c2df-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " 
pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.867975 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1486ea92-d267-49cd-8516-d474ef25c2df-config\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.868042 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1486ea92-d267-49cd-8516-d474ef25c2df-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.868077 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1486ea92-d267-49cd-8516-d474ef25c2df-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.868503 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1486ea92-d267-49cd-8516-d474ef25c2df-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.869407 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1486ea92-d267-49cd-8516-d474ef25c2df-scripts\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.870798 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1486ea92-d267-49cd-8516-d474ef25c2df-config\") pod 
\"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.890841 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1486ea92-d267-49cd-8516-d474ef25c2df-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.891483 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1486ea92-d267-49cd-8516-d474ef25c2df-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:42 crc kubenswrapper[4676]: I0124 00:20:42.895435 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2qll\" (UniqueName: \"kubernetes.io/projected/1486ea92-d267-49cd-8516-d474ef25c2df-kube-api-access-q2qll\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:43 crc kubenswrapper[4676]: I0124 00:20:43.070442 4676 generic.go:334] "Generic (PLEG): container finished" podID="19365292-50d8-4e94-952f-2df7ee20f0ba" containerID="cc6ddc1cfea59e40d8c120b72119e121b7779ff22cf30be7174f4f8d06cdda0d" exitCode=0 Jan 24 00:20:43 crc kubenswrapper[4676]: I0124 00:20:43.070503 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"19365292-50d8-4e94-952f-2df7ee20f0ba","Type":"ContainerDied","Data":"cc6ddc1cfea59e40d8c120b72119e121b7779ff22cf30be7174f4f8d06cdda0d"} Jan 24 00:20:43 crc kubenswrapper[4676]: I0124 00:20:43.075629 4676 generic.go:334] "Generic (PLEG): container finished" podID="2bbaae64-ac2d-43c6-8984-5483f2eb4211" containerID="c0a4e7b27fb0d5d3a378d4a19e1e7909edf293a37101b8ea6af0ed481d778150" exitCode=0 
Jan 24 00:20:43 crc kubenswrapper[4676]: I0124 00:20:43.076105 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2bbaae64-ac2d-43c6-8984-5483f2eb4211","Type":"ContainerDied","Data":"c0a4e7b27fb0d5d3a378d4a19e1e7909edf293a37101b8ea6af0ed481d778150"} Jan 24 00:20:43 crc kubenswrapper[4676]: I0124 00:20:43.622575 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kvzq7" podUID="3df4276b-01b4-403f-b040-85d5a8a9ef03" containerName="registry-server" probeResult="failure" output=< Jan 24 00:20:43 crc kubenswrapper[4676]: timeout: failed to connect service ":50051" within 1s Jan 24 00:20:43 crc kubenswrapper[4676]: > Jan 24 00:20:43 crc kubenswrapper[4676]: I0124 00:20:43.779474 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:43 crc kubenswrapper[4676]: E0124 00:20:43.779639 4676 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 00:20:43 crc kubenswrapper[4676]: E0124 00:20:43.779665 4676 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 00:20:43 crc kubenswrapper[4676]: E0124 00:20:43.779732 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift podName:4620e725-9218-461b-a56d-104bcb7f1df4 nodeName:}" failed. No retries permitted until 2026-01-24 00:20:45.779709778 +0000 UTC m=+1029.809680789 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift") pod "swift-storage-0" (UID: "4620e725-9218-461b-a56d-104bcb7f1df4") : configmap "swift-ring-files" not found Jan 24 00:20:43 crc kubenswrapper[4676]: I0124 00:20:43.815573 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 24 00:20:43 crc kubenswrapper[4676]: I0124 00:20:43.825935 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1486ea92-d267-49cd-8516-d474ef25c2df-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1486ea92-d267-49cd-8516-d474ef25c2df\") " pod="openstack/ovn-northd-0" Jan 24 00:20:43 crc kubenswrapper[4676]: I0124 00:20:43.960488 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.105411 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"19365292-50d8-4e94-952f-2df7ee20f0ba","Type":"ContainerStarted","Data":"e6c89b124d66353c91ee1aa9ed1ccee9c8d066f5478a92d37d4d5862729ffec2"} Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.126415 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2bbaae64-ac2d-43c6-8984-5483f2eb4211","Type":"ContainerStarted","Data":"25d79390127c82f1424411b19f4ba1ae769f32f2036b81b6574e43dee5a1980a"} Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.135466 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=41.086598672 podStartE2EDuration="47.135450092s" podCreationTimestamp="2026-01-24 00:19:57 +0000 UTC" firstStartedPulling="2026-01-24 00:20:26.637701799 +0000 UTC m=+1010.667672800" lastFinishedPulling="2026-01-24 00:20:32.686553229 +0000 UTC 
m=+1016.716524220" observedRunningTime="2026-01-24 00:20:44.13085249 +0000 UTC m=+1028.160823491" watchObservedRunningTime="2026-01-24 00:20:44.135450092 +0000 UTC m=+1028.165421093" Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.162297 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371988.692497 podStartE2EDuration="48.162279163s" podCreationTimestamp="2026-01-24 00:19:56 +0000 UTC" firstStartedPulling="2026-01-24 00:20:10.695446279 +0000 UTC m=+994.725417310" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:20:44.157763265 +0000 UTC m=+1028.187734266" watchObservedRunningTime="2026-01-24 00:20:44.162279163 +0000 UTC m=+1028.192250164" Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.474849 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.477896 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:44 crc kubenswrapper[4676]: W0124 00:20:44.482625 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1486ea92_d267_49cd_8516_d474ef25c2df.slice/crio-184c22321dc7f5e257d71d3ac4bd871cbe4616dccc9cc9b42dad26475bb88d2e WatchSource:0}: Error finding container 184c22321dc7f5e257d71d3ac4bd871cbe4616dccc9cc9b42dad26475bb88d2e: Status 404 returned error can't find the container with id 184c22321dc7f5e257d71d3ac4bd871cbe4616dccc9cc9b42dad26475bb88d2e Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.594566 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-config\") pod \"88763e6b-93f1-4b9d-bb0c-5c487659691a\" (UID: \"88763e6b-93f1-4b9d-bb0c-5c487659691a\") " Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.594610 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx7ts\" (UniqueName: \"kubernetes.io/projected/88763e6b-93f1-4b9d-bb0c-5c487659691a-kube-api-access-gx7ts\") pod \"88763e6b-93f1-4b9d-bb0c-5c487659691a\" (UID: \"88763e6b-93f1-4b9d-bb0c-5c487659691a\") " Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.594700 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-ovsdbserver-nb\") pod \"88763e6b-93f1-4b9d-bb0c-5c487659691a\" (UID: \"88763e6b-93f1-4b9d-bb0c-5c487659691a\") " Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.594798 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-dns-svc\") pod \"88763e6b-93f1-4b9d-bb0c-5c487659691a\" (UID: 
\"88763e6b-93f1-4b9d-bb0c-5c487659691a\") " Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.629541 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88763e6b-93f1-4b9d-bb0c-5c487659691a-kube-api-access-gx7ts" (OuterVolumeSpecName: "kube-api-access-gx7ts") pod "88763e6b-93f1-4b9d-bb0c-5c487659691a" (UID: "88763e6b-93f1-4b9d-bb0c-5c487659691a"). InnerVolumeSpecName "kube-api-access-gx7ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.656527 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-config" (OuterVolumeSpecName: "config") pod "88763e6b-93f1-4b9d-bb0c-5c487659691a" (UID: "88763e6b-93f1-4b9d-bb0c-5c487659691a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.664633 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "88763e6b-93f1-4b9d-bb0c-5c487659691a" (UID: "88763e6b-93f1-4b9d-bb0c-5c487659691a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.674866 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88763e6b-93f1-4b9d-bb0c-5c487659691a" (UID: "88763e6b-93f1-4b9d-bb0c-5c487659691a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.696182 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.696215 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.696228 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx7ts\" (UniqueName: \"kubernetes.io/projected/88763e6b-93f1-4b9d-bb0c-5c487659691a-kube-api-access-gx7ts\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:44 crc kubenswrapper[4676]: I0124 00:20:44.696241 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88763e6b-93f1-4b9d-bb0c-5c487659691a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.134727 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" event={"ID":"88763e6b-93f1-4b9d-bb0c-5c487659691a","Type":"ContainerDied","Data":"e7b4b194ef042a28036832a666cfba7e94711668d0bfc32ab0d45edb956b6341"} Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.135010 4676 scope.go:117] "RemoveContainer" containerID="d0d6ce007660b208b078333ef0f16c59301681c150c8974f5c9cb45ac7fa6ae6" Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.134781 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-qhgmq" Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.136780 4676 generic.go:334] "Generic (PLEG): container finished" podID="1905ca79-a4c4-4286-8d88-2855e7b9ba4c" containerID="837bffae0bd65cfec1fbd0e46c064b367c74a4b7d5a87b7e24ab722d34c34bd4" exitCode=0 Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.136864 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" event={"ID":"1905ca79-a4c4-4286-8d88-2855e7b9ba4c","Type":"ContainerDied","Data":"837bffae0bd65cfec1fbd0e46c064b367c74a4b7d5a87b7e24ab722d34c34bd4"} Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.139880 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1486ea92-d267-49cd-8516-d474ef25c2df","Type":"ContainerStarted","Data":"184c22321dc7f5e257d71d3ac4bd871cbe4616dccc9cc9b42dad26475bb88d2e"} Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.152974 4676 scope.go:117] "RemoveContainer" containerID="3928e7c189d053145e8953aee775f48c10df19c47e391d85edfeeb0d85f27e07" Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.307584 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-qhgmq"] Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.313483 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-qhgmq"] Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.714591 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-sbwwk" Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.821348 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:45 crc 
kubenswrapper[4676]: E0124 00:20:45.823065 4676 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 24 00:20:45 crc kubenswrapper[4676]: E0124 00:20:45.823093 4676 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 24 00:20:45 crc kubenswrapper[4676]: E0124 00:20:45.823137 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift podName:4620e725-9218-461b-a56d-104bcb7f1df4 nodeName:}" failed. No retries permitted until 2026-01-24 00:20:49.823122142 +0000 UTC m=+1033.853093143 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift") pod "swift-storage-0" (UID: "4620e725-9218-461b-a56d-104bcb7f1df4") : configmap "swift-ring-files" not found
Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.885892 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-5fmzb"]
Jan 24 00:20:45 crc kubenswrapper[4676]: E0124 00:20:45.886198 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88763e6b-93f1-4b9d-bb0c-5c487659691a" containerName="init"
Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.886209 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="88763e6b-93f1-4b9d-bb0c-5c487659691a" containerName="init"
Jan 24 00:20:45 crc kubenswrapper[4676]: E0124 00:20:45.886237 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88763e6b-93f1-4b9d-bb0c-5c487659691a" containerName="dnsmasq-dns"
Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.886243 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="88763e6b-93f1-4b9d-bb0c-5c487659691a" containerName="dnsmasq-dns"
Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.886394 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="88763e6b-93f1-4b9d-bb0c-5c487659691a" containerName="dnsmasq-dns"
Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.886824 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.898611 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.902720 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.905811 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 24 00:20:45 crc kubenswrapper[4676]: I0124 00:20:45.911728 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5fmzb"]
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.024333 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-combined-ca-bundle\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.024393 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zd5k\" (UniqueName: \"kubernetes.io/projected/4ff61f48-e451-47e8-adcc-0870b29d28a9-kube-api-access-2zd5k\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.024434 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4ff61f48-e451-47e8-adcc-0870b29d28a9-etc-swift\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.024459 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4ff61f48-e451-47e8-adcc-0870b29d28a9-ring-data-devices\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.024502 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ff61f48-e451-47e8-adcc-0870b29d28a9-scripts\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.024531 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-swiftconf\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.024563 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-dispersionconf\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.126107 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zd5k\" (UniqueName: \"kubernetes.io/projected/4ff61f48-e451-47e8-adcc-0870b29d28a9-kube-api-access-2zd5k\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.126406 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4ff61f48-e451-47e8-adcc-0870b29d28a9-etc-swift\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.126434 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4ff61f48-e451-47e8-adcc-0870b29d28a9-ring-data-devices\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.126482 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ff61f48-e451-47e8-adcc-0870b29d28a9-scripts\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.126514 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-swiftconf\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.126545 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-dispersionconf\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.126601 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-combined-ca-bundle\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.127056 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4ff61f48-e451-47e8-adcc-0870b29d28a9-ring-data-devices\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.127300 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4ff61f48-e451-47e8-adcc-0870b29d28a9-etc-swift\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.127775 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ff61f48-e451-47e8-adcc-0870b29d28a9-scripts\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.130809 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-combined-ca-bundle\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.131518 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-dispersionconf\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.131594 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-swiftconf\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.146018 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zd5k\" (UniqueName: \"kubernetes.io/projected/4ff61f48-e451-47e8-adcc-0870b29d28a9-kube-api-access-2zd5k\") pod \"swift-ring-rebalance-5fmzb\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") " pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.147035 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" event={"ID":"1905ca79-a4c4-4286-8d88-2855e7b9ba4c","Type":"ContainerStarted","Data":"1d6d0279f417026b6ba6bd78b2b2fd59bc265f683401ca01652ddd44d6b071a2"}
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.147238 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-88xjq"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.168622 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" podStartSLOduration=6.168606171 podStartE2EDuration="6.168606171s" podCreationTimestamp="2026-01-24 00:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:20:46.163167375 +0000 UTC m=+1030.193138376" watchObservedRunningTime="2026-01-24 00:20:46.168606171 +0000 UTC m=+1030.198577172"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.219405 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.278965 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88763e6b-93f1-4b9d-bb0c-5c487659691a" path="/var/lib/kubelet/pods/88763e6b-93f1-4b9d-bb0c-5c487659691a/volumes"
Jan 24 00:20:46 crc kubenswrapper[4676]: W0124 00:20:46.720593 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ff61f48_e451_47e8_adcc_0870b29d28a9.slice/crio-edae868ed3f37d7cd803c42347470b3ff0f9857923570cef0ccdd12a7ade323b WatchSource:0}: Error finding container edae868ed3f37d7cd803c42347470b3ff0f9857923570cef0ccdd12a7ade323b: Status 404 returned error can't find the container with id edae868ed3f37d7cd803c42347470b3ff0f9857923570cef0ccdd12a7ade323b
Jan 24 00:20:46 crc kubenswrapper[4676]: I0124 00:20:46.721534 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5fmzb"]
Jan 24 00:20:47 crc kubenswrapper[4676]: I0124 00:20:47.162101 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1486ea92-d267-49cd-8516-d474ef25c2df","Type":"ContainerStarted","Data":"2bc81f0c9e99c0a686f722f292e67c5c773c0f09646441418ba3c8b48bf7b43d"}
Jan 24 00:20:47 crc kubenswrapper[4676]: I0124 00:20:47.162148 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1486ea92-d267-49cd-8516-d474ef25c2df","Type":"ContainerStarted","Data":"0d6819a80061f8595b56a96c4f3467bde91ad0c094f9c254dd9411a388f0a563"}
Jan 24 00:20:47 crc kubenswrapper[4676]: I0124 00:20:47.162202 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 24 00:20:47 crc kubenswrapper[4676]: I0124 00:20:47.163049 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5fmzb" event={"ID":"4ff61f48-e451-47e8-adcc-0870b29d28a9","Type":"ContainerStarted","Data":"edae868ed3f37d7cd803c42347470b3ff0f9857923570cef0ccdd12a7ade323b"}
Jan 24 00:20:47 crc kubenswrapper[4676]: I0124 00:20:47.182815 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.717620582 podStartE2EDuration="5.182799748s" podCreationTimestamp="2026-01-24 00:20:42 +0000 UTC" firstStartedPulling="2026-01-24 00:20:44.485292465 +0000 UTC m=+1028.515263466" lastFinishedPulling="2026-01-24 00:20:45.950471631 +0000 UTC m=+1029.980442632" observedRunningTime="2026-01-24 00:20:47.178661022 +0000 UTC m=+1031.208632083" watchObservedRunningTime="2026-01-24 00:20:47.182799748 +0000 UTC m=+1031.212770749"
Jan 24 00:20:47 crc kubenswrapper[4676]: I0124 00:20:47.432019 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 24 00:20:47 crc kubenswrapper[4676]: I0124 00:20:47.432088 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 24 00:20:48 crc kubenswrapper[4676]: I0124 00:20:48.639335 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 24 00:20:48 crc kubenswrapper[4676]: I0124 00:20:48.639706 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 24 00:20:49 crc kubenswrapper[4676]: I0124 00:20:49.747489 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 24 00:20:49 crc kubenswrapper[4676]: I0124 00:20:49.828834 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 24 00:20:49 crc kubenswrapper[4676]: I0124 00:20:49.900211 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0"
Jan 24 00:20:49 crc kubenswrapper[4676]: E0124 00:20:49.901127 4676 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 24 00:20:49 crc kubenswrapper[4676]: E0124 00:20:49.901147 4676 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 24 00:20:49 crc kubenswrapper[4676]: E0124 00:20:49.901184 4676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift podName:4620e725-9218-461b-a56d-104bcb7f1df4 nodeName:}" failed. No retries permitted until 2026-01-24 00:20:57.901169691 +0000 UTC m=+1041.931140692 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift") pod "swift-storage-0" (UID: "4620e725-9218-461b-a56d-104bcb7f1df4") : configmap "swift-ring-files" not found
Jan 24 00:20:50 crc kubenswrapper[4676]: I0124 00:20:50.676626 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 24 00:20:50 crc kubenswrapper[4676]: I0124 00:20:50.953905 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.042789 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.174519 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-88xjq"
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.193183 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5fmzb" event={"ID":"4ff61f48-e451-47e8-adcc-0870b29d28a9","Type":"ContainerStarted","Data":"a2edcf50a838124e8087fc219bf5e4c2e5444fe834157a78116049634218ec2a"}
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.228839 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-5fmzb" podStartSLOduration=2.221291726 podStartE2EDuration="6.228822497s" podCreationTimestamp="2026-01-24 00:20:45 +0000 UTC" firstStartedPulling="2026-01-24 00:20:46.722804612 +0000 UTC m=+1030.752775623" lastFinishedPulling="2026-01-24 00:20:50.730335392 +0000 UTC m=+1034.760306394" observedRunningTime="2026-01-24 00:20:51.218615955 +0000 UTC m=+1035.248586956" watchObservedRunningTime="2026-01-24 00:20:51.228822497 +0000 UTC m=+1035.258793498"
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.247013 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbwwk"]
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.247450 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-sbwwk" podUID="1f21d906-2974-4e56-b11f-d888e452c565" containerName="dnsmasq-dns" containerID="cri-o://577f50aa5b4437d91afcb97c82c41e9f5ee408e1ba9e1be7852f62684447d5be" gracePeriod=10
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.666769 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sbwwk"
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.739187 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btl5m\" (UniqueName: \"kubernetes.io/projected/1f21d906-2974-4e56-b11f-d888e452c565-kube-api-access-btl5m\") pod \"1f21d906-2974-4e56-b11f-d888e452c565\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") "
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.739297 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-ovsdbserver-nb\") pod \"1f21d906-2974-4e56-b11f-d888e452c565\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") "
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.739334 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-ovsdbserver-sb\") pod \"1f21d906-2974-4e56-b11f-d888e452c565\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") "
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.740100 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-dns-svc\") pod \"1f21d906-2974-4e56-b11f-d888e452c565\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") "
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.740117 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-config\") pod \"1f21d906-2974-4e56-b11f-d888e452c565\" (UID: \"1f21d906-2974-4e56-b11f-d888e452c565\") "
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.745273 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f21d906-2974-4e56-b11f-d888e452c565-kube-api-access-btl5m" (OuterVolumeSpecName: "kube-api-access-btl5m") pod "1f21d906-2974-4e56-b11f-d888e452c565" (UID: "1f21d906-2974-4e56-b11f-d888e452c565"). InnerVolumeSpecName "kube-api-access-btl5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.792451 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-config" (OuterVolumeSpecName: "config") pod "1f21d906-2974-4e56-b11f-d888e452c565" (UID: "1f21d906-2974-4e56-b11f-d888e452c565"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.794928 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1f21d906-2974-4e56-b11f-d888e452c565" (UID: "1f21d906-2974-4e56-b11f-d888e452c565"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.810221 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1f21d906-2974-4e56-b11f-d888e452c565" (UID: "1f21d906-2974-4e56-b11f-d888e452c565"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.843444 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.843469 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-config\") on node \"crc\" DevicePath \"\""
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.843478 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btl5m\" (UniqueName: \"kubernetes.io/projected/1f21d906-2974-4e56-b11f-d888e452c565-kube-api-access-btl5m\") on node \"crc\" DevicePath \"\""
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.843511 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.852785 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1f21d906-2974-4e56-b11f-d888e452c565" (UID: "1f21d906-2974-4e56-b11f-d888e452c565"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:20:51 crc kubenswrapper[4676]: I0124 00:20:51.945175 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f21d906-2974-4e56-b11f-d888e452c565-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 24 00:20:52 crc kubenswrapper[4676]: I0124 00:20:52.199764 4676 generic.go:334] "Generic (PLEG): container finished" podID="1f21d906-2974-4e56-b11f-d888e452c565" containerID="577f50aa5b4437d91afcb97c82c41e9f5ee408e1ba9e1be7852f62684447d5be" exitCode=0
Jan 24 00:20:52 crc kubenswrapper[4676]: I0124 00:20:52.200465 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sbwwk"
Jan 24 00:20:52 crc kubenswrapper[4676]: I0124 00:20:52.200564 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbwwk" event={"ID":"1f21d906-2974-4e56-b11f-d888e452c565","Type":"ContainerDied","Data":"577f50aa5b4437d91afcb97c82c41e9f5ee408e1ba9e1be7852f62684447d5be"}
Jan 24 00:20:52 crc kubenswrapper[4676]: I0124 00:20:52.200586 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbwwk" event={"ID":"1f21d906-2974-4e56-b11f-d888e452c565","Type":"ContainerDied","Data":"089f8e4a35ef84364abbbae1879307f5831ba1a07f81a4231c6cfc6207ff2e6a"}
Jan 24 00:20:52 crc kubenswrapper[4676]: I0124 00:20:52.200601 4676 scope.go:117] "RemoveContainer" containerID="577f50aa5b4437d91afcb97c82c41e9f5ee408e1ba9e1be7852f62684447d5be"
Jan 24 00:20:52 crc kubenswrapper[4676]: I0124 00:20:52.229348 4676 scope.go:117] "RemoveContainer" containerID="d33ce991197b97fff8c8aedf2ecfa7d98c5e7a4c61013dd5a3b38abbab942c0d"
Jan 24 00:20:52 crc kubenswrapper[4676]: I0124 00:20:52.254358 4676 scope.go:117] "RemoveContainer" containerID="577f50aa5b4437d91afcb97c82c41e9f5ee408e1ba9e1be7852f62684447d5be"
Jan 24 00:20:52 crc kubenswrapper[4676]: E0124 00:20:52.255786 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"577f50aa5b4437d91afcb97c82c41e9f5ee408e1ba9e1be7852f62684447d5be\": container with ID starting with 577f50aa5b4437d91afcb97c82c41e9f5ee408e1ba9e1be7852f62684447d5be not found: ID does not exist" containerID="577f50aa5b4437d91afcb97c82c41e9f5ee408e1ba9e1be7852f62684447d5be"
Jan 24 00:20:52 crc kubenswrapper[4676]: I0124 00:20:52.255814 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"577f50aa5b4437d91afcb97c82c41e9f5ee408e1ba9e1be7852f62684447d5be"} err="failed to get container status \"577f50aa5b4437d91afcb97c82c41e9f5ee408e1ba9e1be7852f62684447d5be\": rpc error: code = NotFound desc = could not find container \"577f50aa5b4437d91afcb97c82c41e9f5ee408e1ba9e1be7852f62684447d5be\": container with ID starting with 577f50aa5b4437d91afcb97c82c41e9f5ee408e1ba9e1be7852f62684447d5be not found: ID does not exist"
Jan 24 00:20:52 crc kubenswrapper[4676]: I0124 00:20:52.255839 4676 scope.go:117] "RemoveContainer" containerID="d33ce991197b97fff8c8aedf2ecfa7d98c5e7a4c61013dd5a3b38abbab942c0d"
Jan 24 00:20:52 crc kubenswrapper[4676]: E0124 00:20:52.256039 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d33ce991197b97fff8c8aedf2ecfa7d98c5e7a4c61013dd5a3b38abbab942c0d\": container with ID starting with d33ce991197b97fff8c8aedf2ecfa7d98c5e7a4c61013dd5a3b38abbab942c0d not found: ID does not exist" containerID="d33ce991197b97fff8c8aedf2ecfa7d98c5e7a4c61013dd5a3b38abbab942c0d"
Jan 24 00:20:52 crc kubenswrapper[4676]: I0124 00:20:52.256057 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d33ce991197b97fff8c8aedf2ecfa7d98c5e7a4c61013dd5a3b38abbab942c0d"} err="failed to get container status \"d33ce991197b97fff8c8aedf2ecfa7d98c5e7a4c61013dd5a3b38abbab942c0d\": rpc error: code = NotFound desc = could not find container \"d33ce991197b97fff8c8aedf2ecfa7d98c5e7a4c61013dd5a3b38abbab942c0d\": container with ID starting with d33ce991197b97fff8c8aedf2ecfa7d98c5e7a4c61013dd5a3b38abbab942c0d not found: ID does not exist"
Jan 24 00:20:52 crc kubenswrapper[4676]: I0124 00:20:52.267235 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbwwk"]
Jan 24 00:20:52 crc kubenswrapper[4676]: I0124 00:20:52.267277 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbwwk"]
Jan 24 00:20:52 crc kubenswrapper[4676]: I0124 00:20:52.609035 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kvzq7"
Jan 24 00:20:52 crc kubenswrapper[4676]: I0124 00:20:52.650928 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kvzq7"
Jan 24 00:20:52 crc kubenswrapper[4676]: I0124 00:20:52.842664 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kvzq7"]
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.077237 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d7f6-account-create-update-brcr8"]
Jan 24 00:20:54 crc kubenswrapper[4676]: E0124 00:20:54.077803 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f21d906-2974-4e56-b11f-d888e452c565" containerName="dnsmasq-dns"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.077814 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f21d906-2974-4e56-b11f-d888e452c565" containerName="dnsmasq-dns"
Jan 24 00:20:54 crc kubenswrapper[4676]: E0124 00:20:54.077831 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f21d906-2974-4e56-b11f-d888e452c565" containerName="init"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.077837 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f21d906-2974-4e56-b11f-d888e452c565" containerName="init"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.077997 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f21d906-2974-4e56-b11f-d888e452c565" containerName="dnsmasq-dns"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.078494 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d7f6-account-create-update-brcr8"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.080759 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.105956 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d7f6-account-create-update-brcr8"]
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.122636 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-ph986"]
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.123874 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ph986"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.153744 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ph986"]
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.181216 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c25d86-7dc6-4b15-9511-3a7cbf4c7592-operator-scripts\") pod \"glance-d7f6-account-create-update-brcr8\" (UID: \"98c25d86-7dc6-4b15-9511-3a7cbf4c7592\") " pod="openstack/glance-d7f6-account-create-update-brcr8"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.181474 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssl97\" (UniqueName: \"kubernetes.io/projected/925957f4-9ca1-4e46-b3ac-0dd0ca5713cd-kube-api-access-ssl97\") pod \"glance-db-create-ph986\" (UID: \"925957f4-9ca1-4e46-b3ac-0dd0ca5713cd\") " pod="openstack/glance-db-create-ph986"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.181598 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/925957f4-9ca1-4e46-b3ac-0dd0ca5713cd-operator-scripts\") pod \"glance-db-create-ph986\" (UID: \"925957f4-9ca1-4e46-b3ac-0dd0ca5713cd\") " pod="openstack/glance-db-create-ph986"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.181670 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md5qp\" (UniqueName: \"kubernetes.io/projected/98c25d86-7dc6-4b15-9511-3a7cbf4c7592-kube-api-access-md5qp\") pod \"glance-d7f6-account-create-update-brcr8\" (UID: \"98c25d86-7dc6-4b15-9511-3a7cbf4c7592\") " pod="openstack/glance-d7f6-account-create-update-brcr8"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.213327 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kvzq7" podUID="3df4276b-01b4-403f-b040-85d5a8a9ef03" containerName="registry-server" containerID="cri-o://23f51327ad99c429533212b03ced853abc5e0f0d71914be27d57c58e73611634" gracePeriod=2
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.268086 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f21d906-2974-4e56-b11f-d888e452c565" path="/var/lib/kubelet/pods/1f21d906-2974-4e56-b11f-d888e452c565/volumes"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.284500 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssl97\" (UniqueName: \"kubernetes.io/projected/925957f4-9ca1-4e46-b3ac-0dd0ca5713cd-kube-api-access-ssl97\") pod \"glance-db-create-ph986\" (UID: \"925957f4-9ca1-4e46-b3ac-0dd0ca5713cd\") " pod="openstack/glance-db-create-ph986"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.285550 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/925957f4-9ca1-4e46-b3ac-0dd0ca5713cd-operator-scripts\") pod \"glance-db-create-ph986\" (UID: \"925957f4-9ca1-4e46-b3ac-0dd0ca5713cd\") " pod="openstack/glance-db-create-ph986"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.285681 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md5qp\" (UniqueName: \"kubernetes.io/projected/98c25d86-7dc6-4b15-9511-3a7cbf4c7592-kube-api-access-md5qp\") pod \"glance-d7f6-account-create-update-brcr8\" (UID: \"98c25d86-7dc6-4b15-9511-3a7cbf4c7592\") " pod="openstack/glance-d7f6-account-create-update-brcr8"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.285782 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c25d86-7dc6-4b15-9511-3a7cbf4c7592-operator-scripts\") pod \"glance-d7f6-account-create-update-brcr8\" (UID: \"98c25d86-7dc6-4b15-9511-3a7cbf4c7592\") " pod="openstack/glance-d7f6-account-create-update-brcr8"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.286978 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c25d86-7dc6-4b15-9511-3a7cbf4c7592-operator-scripts\") pod \"glance-d7f6-account-create-update-brcr8\" (UID: \"98c25d86-7dc6-4b15-9511-3a7cbf4c7592\") " pod="openstack/glance-d7f6-account-create-update-brcr8"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.287111 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/925957f4-9ca1-4e46-b3ac-0dd0ca5713cd-operator-scripts\") pod \"glance-db-create-ph986\" (UID: \"925957f4-9ca1-4e46-b3ac-0dd0ca5713cd\") " pod="openstack/glance-db-create-ph986"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.317475 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md5qp\" (UniqueName: \"kubernetes.io/projected/98c25d86-7dc6-4b15-9511-3a7cbf4c7592-kube-api-access-md5qp\") pod \"glance-d7f6-account-create-update-brcr8\" (UID: \"98c25d86-7dc6-4b15-9511-3a7cbf4c7592\") " pod="openstack/glance-d7f6-account-create-update-brcr8"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.320460 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssl97\" (UniqueName: \"kubernetes.io/projected/925957f4-9ca1-4e46-b3ac-0dd0ca5713cd-kube-api-access-ssl97\") pod \"glance-db-create-ph986\" (UID: \"925957f4-9ca1-4e46-b3ac-0dd0ca5713cd\") " pod="openstack/glance-db-create-ph986"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.400296 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d7f6-account-create-update-brcr8"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.448757 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ph986"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.615839 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvzq7"
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.695425 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df4276b-01b4-403f-b040-85d5a8a9ef03-utilities\") pod \"3df4276b-01b4-403f-b040-85d5a8a9ef03\" (UID: \"3df4276b-01b4-403f-b040-85d5a8a9ef03\") "
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.695587 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df4276b-01b4-403f-b040-85d5a8a9ef03-catalog-content\") pod \"3df4276b-01b4-403f-b040-85d5a8a9ef03\" (UID: \"3df4276b-01b4-403f-b040-85d5a8a9ef03\") "
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.695614 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk82l\" (UniqueName: \"kubernetes.io/projected/3df4276b-01b4-403f-b040-85d5a8a9ef03-kube-api-access-tk82l\") pod \"3df4276b-01b4-403f-b040-85d5a8a9ef03\" (UID: \"3df4276b-01b4-403f-b040-85d5a8a9ef03\") "
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.697659 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df4276b-01b4-403f-b040-85d5a8a9ef03-utilities" (OuterVolumeSpecName: "utilities") pod "3df4276b-01b4-403f-b040-85d5a8a9ef03" (UID: "3df4276b-01b4-403f-b040-85d5a8a9ef03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.699561 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df4276b-01b4-403f-b040-85d5a8a9ef03-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.702197 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df4276b-01b4-403f-b040-85d5a8a9ef03-kube-api-access-tk82l" (OuterVolumeSpecName: "kube-api-access-tk82l") pod "3df4276b-01b4-403f-b040-85d5a8a9ef03" (UID: "3df4276b-01b4-403f-b040-85d5a8a9ef03"). InnerVolumeSpecName "kube-api-access-tk82l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.759706 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df4276b-01b4-403f-b040-85d5a8a9ef03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3df4276b-01b4-403f-b040-85d5a8a9ef03" (UID: "3df4276b-01b4-403f-b040-85d5a8a9ef03"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.800837 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df4276b-01b4-403f-b040-85d5a8a9ef03-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.801087 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk82l\" (UniqueName: \"kubernetes.io/projected/3df4276b-01b4-403f-b040-85d5a8a9ef03-kube-api-access-tk82l\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:54 crc kubenswrapper[4676]: I0124 00:20:54.892704 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d7f6-account-create-update-brcr8"] Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.024040 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ph986"] Jan 24 00:20:55 crc kubenswrapper[4676]: W0124 00:20:55.026073 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod925957f4_9ca1_4e46_b3ac_0dd0ca5713cd.slice/crio-00a687de5e1dc9cd4413233520312cb969bf584cd8c200af2ba5dc7b9782e4f4 WatchSource:0}: Error finding container 00a687de5e1dc9cd4413233520312cb969bf584cd8c200af2ba5dc7b9782e4f4: Status 404 returned error can't find the container with id 00a687de5e1dc9cd4413233520312cb969bf584cd8c200af2ba5dc7b9782e4f4 Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.223651 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ph986" event={"ID":"925957f4-9ca1-4e46-b3ac-0dd0ca5713cd","Type":"ContainerStarted","Data":"a55ff63829260fcafaf5bd727871713740d9ed36c94c4fc218cf31c8bb4fa53b"} Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.223731 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ph986" 
event={"ID":"925957f4-9ca1-4e46-b3ac-0dd0ca5713cd","Type":"ContainerStarted","Data":"00a687de5e1dc9cd4413233520312cb969bf584cd8c200af2ba5dc7b9782e4f4"} Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.227398 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d7f6-account-create-update-brcr8" event={"ID":"98c25d86-7dc6-4b15-9511-3a7cbf4c7592","Type":"ContainerStarted","Data":"8280acdb35983168359f77fef7c062161338aa47a5bf092b620c75a0dd83736f"} Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.227436 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d7f6-account-create-update-brcr8" event={"ID":"98c25d86-7dc6-4b15-9511-3a7cbf4c7592","Type":"ContainerStarted","Data":"520e95ec58419aeecc535479889b038fb2987a491e8f1735dc7b52a21a89b798"} Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.232237 4676 generic.go:334] "Generic (PLEG): container finished" podID="3df4276b-01b4-403f-b040-85d5a8a9ef03" containerID="23f51327ad99c429533212b03ced853abc5e0f0d71914be27d57c58e73611634" exitCode=0 Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.232286 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvzq7" event={"ID":"3df4276b-01b4-403f-b040-85d5a8a9ef03","Type":"ContainerDied","Data":"23f51327ad99c429533212b03ced853abc5e0f0d71914be27d57c58e73611634"} Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.232527 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kvzq7" Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.232914 4676 scope.go:117] "RemoveContainer" containerID="23f51327ad99c429533212b03ced853abc5e0f0d71914be27d57c58e73611634" Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.232309 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvzq7" event={"ID":"3df4276b-01b4-403f-b040-85d5a8a9ef03","Type":"ContainerDied","Data":"88c849f82d3193bf091f531d74d4fde6ab5977e0c495ee6e86efeb15c6348494"} Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.256655 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-ph986" podStartSLOduration=1.256636728 podStartE2EDuration="1.256636728s" podCreationTimestamp="2026-01-24 00:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:20:55.242498375 +0000 UTC m=+1039.272469376" watchObservedRunningTime="2026-01-24 00:20:55.256636728 +0000 UTC m=+1039.286607729" Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.263562 4676 scope.go:117] "RemoveContainer" containerID="4f4e99bafbb1a85396c9179c36382959eaa8fbb41e0c1ee24af103d0f0f87055" Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.272881 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-d7f6-account-create-update-brcr8" podStartSLOduration=1.272864005 podStartE2EDuration="1.272864005s" podCreationTimestamp="2026-01-24 00:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:20:55.268892184 +0000 UTC m=+1039.298863175" watchObservedRunningTime="2026-01-24 00:20:55.272864005 +0000 UTC m=+1039.302835006" Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.289425 4676 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/certified-operators-kvzq7"] Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.294735 4676 scope.go:117] "RemoveContainer" containerID="82d05dea3899aabc4e3fbab8a0c2cac7fba0b90a8cfd043aada7232a1d18050e" Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.317734 4676 scope.go:117] "RemoveContainer" containerID="23f51327ad99c429533212b03ced853abc5e0f0d71914be27d57c58e73611634" Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.318615 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kvzq7"] Jan 24 00:20:55 crc kubenswrapper[4676]: E0124 00:20:55.320971 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23f51327ad99c429533212b03ced853abc5e0f0d71914be27d57c58e73611634\": container with ID starting with 23f51327ad99c429533212b03ced853abc5e0f0d71914be27d57c58e73611634 not found: ID does not exist" containerID="23f51327ad99c429533212b03ced853abc5e0f0d71914be27d57c58e73611634" Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.321003 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23f51327ad99c429533212b03ced853abc5e0f0d71914be27d57c58e73611634"} err="failed to get container status \"23f51327ad99c429533212b03ced853abc5e0f0d71914be27d57c58e73611634\": rpc error: code = NotFound desc = could not find container \"23f51327ad99c429533212b03ced853abc5e0f0d71914be27d57c58e73611634\": container with ID starting with 23f51327ad99c429533212b03ced853abc5e0f0d71914be27d57c58e73611634 not found: ID does not exist" Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.321025 4676 scope.go:117] "RemoveContainer" containerID="4f4e99bafbb1a85396c9179c36382959eaa8fbb41e0c1ee24af103d0f0f87055" Jan 24 00:20:55 crc kubenswrapper[4676]: E0124 00:20:55.321341 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"4f4e99bafbb1a85396c9179c36382959eaa8fbb41e0c1ee24af103d0f0f87055\": container with ID starting with 4f4e99bafbb1a85396c9179c36382959eaa8fbb41e0c1ee24af103d0f0f87055 not found: ID does not exist" containerID="4f4e99bafbb1a85396c9179c36382959eaa8fbb41e0c1ee24af103d0f0f87055" Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.321361 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f4e99bafbb1a85396c9179c36382959eaa8fbb41e0c1ee24af103d0f0f87055"} err="failed to get container status \"4f4e99bafbb1a85396c9179c36382959eaa8fbb41e0c1ee24af103d0f0f87055\": rpc error: code = NotFound desc = could not find container \"4f4e99bafbb1a85396c9179c36382959eaa8fbb41e0c1ee24af103d0f0f87055\": container with ID starting with 4f4e99bafbb1a85396c9179c36382959eaa8fbb41e0c1ee24af103d0f0f87055 not found: ID does not exist" Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.321400 4676 scope.go:117] "RemoveContainer" containerID="82d05dea3899aabc4e3fbab8a0c2cac7fba0b90a8cfd043aada7232a1d18050e" Jan 24 00:20:55 crc kubenswrapper[4676]: E0124 00:20:55.321710 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82d05dea3899aabc4e3fbab8a0c2cac7fba0b90a8cfd043aada7232a1d18050e\": container with ID starting with 82d05dea3899aabc4e3fbab8a0c2cac7fba0b90a8cfd043aada7232a1d18050e not found: ID does not exist" containerID="82d05dea3899aabc4e3fbab8a0c2cac7fba0b90a8cfd043aada7232a1d18050e" Jan 24 00:20:55 crc kubenswrapper[4676]: I0124 00:20:55.321742 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82d05dea3899aabc4e3fbab8a0c2cac7fba0b90a8cfd043aada7232a1d18050e"} err="failed to get container status \"82d05dea3899aabc4e3fbab8a0c2cac7fba0b90a8cfd043aada7232a1d18050e\": rpc error: code = NotFound desc = could not find container 
\"82d05dea3899aabc4e3fbab8a0c2cac7fba0b90a8cfd043aada7232a1d18050e\": container with ID starting with 82d05dea3899aabc4e3fbab8a0c2cac7fba0b90a8cfd043aada7232a1d18050e not found: ID does not exist" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.101994 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5zwrz"] Jan 24 00:20:56 crc kubenswrapper[4676]: E0124 00:20:56.102315 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df4276b-01b4-403f-b040-85d5a8a9ef03" containerName="registry-server" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.102335 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df4276b-01b4-403f-b040-85d5a8a9ef03" containerName="registry-server" Jan 24 00:20:56 crc kubenswrapper[4676]: E0124 00:20:56.102461 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df4276b-01b4-403f-b040-85d5a8a9ef03" containerName="extract-utilities" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.102474 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df4276b-01b4-403f-b040-85d5a8a9ef03" containerName="extract-utilities" Jan 24 00:20:56 crc kubenswrapper[4676]: E0124 00:20:56.102488 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df4276b-01b4-403f-b040-85d5a8a9ef03" containerName="extract-content" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.102502 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df4276b-01b4-403f-b040-85d5a8a9ef03" containerName="extract-content" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.102671 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df4276b-01b4-403f-b040-85d5a8a9ef03" containerName="registry-server" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.103223 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5zwrz" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.107551 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.121773 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5zwrz"] Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.265753 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/320759e3-3504-4bd4-b55a-b8cb664e1ce0-operator-scripts\") pod \"root-account-create-update-5zwrz\" (UID: \"320759e3-3504-4bd4-b55a-b8cb664e1ce0\") " pod="openstack/root-account-create-update-5zwrz" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.265811 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krzzm\" (UniqueName: \"kubernetes.io/projected/320759e3-3504-4bd4-b55a-b8cb664e1ce0-kube-api-access-krzzm\") pod \"root-account-create-update-5zwrz\" (UID: \"320759e3-3504-4bd4-b55a-b8cb664e1ce0\") " pod="openstack/root-account-create-update-5zwrz" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.287252 4676 generic.go:334] "Generic (PLEG): container finished" podID="925957f4-9ca1-4e46-b3ac-0dd0ca5713cd" containerID="a55ff63829260fcafaf5bd727871713740d9ed36c94c4fc218cf31c8bb4fa53b" exitCode=0 Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.288518 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df4276b-01b4-403f-b040-85d5a8a9ef03" path="/var/lib/kubelet/pods/3df4276b-01b4-403f-b040-85d5a8a9ef03/volumes" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.289112 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ph986" 
event={"ID":"925957f4-9ca1-4e46-b3ac-0dd0ca5713cd","Type":"ContainerDied","Data":"a55ff63829260fcafaf5bd727871713740d9ed36c94c4fc218cf31c8bb4fa53b"} Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.292865 4676 generic.go:334] "Generic (PLEG): container finished" podID="98c25d86-7dc6-4b15-9511-3a7cbf4c7592" containerID="8280acdb35983168359f77fef7c062161338aa47a5bf092b620c75a0dd83736f" exitCode=0 Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.293001 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d7f6-account-create-update-brcr8" event={"ID":"98c25d86-7dc6-4b15-9511-3a7cbf4c7592","Type":"ContainerDied","Data":"8280acdb35983168359f77fef7c062161338aa47a5bf092b620c75a0dd83736f"} Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.366827 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/320759e3-3504-4bd4-b55a-b8cb664e1ce0-operator-scripts\") pod \"root-account-create-update-5zwrz\" (UID: \"320759e3-3504-4bd4-b55a-b8cb664e1ce0\") " pod="openstack/root-account-create-update-5zwrz" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.366870 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krzzm\" (UniqueName: \"kubernetes.io/projected/320759e3-3504-4bd4-b55a-b8cb664e1ce0-kube-api-access-krzzm\") pod \"root-account-create-update-5zwrz\" (UID: \"320759e3-3504-4bd4-b55a-b8cb664e1ce0\") " pod="openstack/root-account-create-update-5zwrz" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.368070 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/320759e3-3504-4bd4-b55a-b8cb664e1ce0-operator-scripts\") pod \"root-account-create-update-5zwrz\" (UID: \"320759e3-3504-4bd4-b55a-b8cb664e1ce0\") " pod="openstack/root-account-create-update-5zwrz" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.390203 
4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krzzm\" (UniqueName: \"kubernetes.io/projected/320759e3-3504-4bd4-b55a-b8cb664e1ce0-kube-api-access-krzzm\") pod \"root-account-create-update-5zwrz\" (UID: \"320759e3-3504-4bd4-b55a-b8cb664e1ce0\") " pod="openstack/root-account-create-update-5zwrz" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.421976 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5zwrz" Jan 24 00:20:56 crc kubenswrapper[4676]: I0124 00:20:56.859446 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5zwrz"] Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.308254 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5zwrz" event={"ID":"320759e3-3504-4bd4-b55a-b8cb664e1ce0","Type":"ContainerStarted","Data":"d2ed0ffdbd6bb9c785556f969ad9aeabbea68ca6c96b8f79735c762793724bfe"} Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.308305 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5zwrz" event={"ID":"320759e3-3504-4bd4-b55a-b8cb664e1ce0","Type":"ContainerStarted","Data":"a1c7b5b20a313f8f253066a409ba7019da788bdf5e0a133113d60074a31eb7cf"} Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.332914 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-5zwrz" podStartSLOduration=1.332893149 podStartE2EDuration="1.332893149s" podCreationTimestamp="2026-01-24 00:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:20:57.324594215 +0000 UTC m=+1041.354565226" watchObservedRunningTime="2026-01-24 00:20:57.332893149 +0000 UTC m=+1041.362864150" Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.703124 4676 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ph986" Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.707977 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d7f6-account-create-update-brcr8" Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.790006 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssl97\" (UniqueName: \"kubernetes.io/projected/925957f4-9ca1-4e46-b3ac-0dd0ca5713cd-kube-api-access-ssl97\") pod \"925957f4-9ca1-4e46-b3ac-0dd0ca5713cd\" (UID: \"925957f4-9ca1-4e46-b3ac-0dd0ca5713cd\") " Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.790094 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/925957f4-9ca1-4e46-b3ac-0dd0ca5713cd-operator-scripts\") pod \"925957f4-9ca1-4e46-b3ac-0dd0ca5713cd\" (UID: \"925957f4-9ca1-4e46-b3ac-0dd0ca5713cd\") " Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.790137 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c25d86-7dc6-4b15-9511-3a7cbf4c7592-operator-scripts\") pod \"98c25d86-7dc6-4b15-9511-3a7cbf4c7592\" (UID: \"98c25d86-7dc6-4b15-9511-3a7cbf4c7592\") " Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.790268 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md5qp\" (UniqueName: \"kubernetes.io/projected/98c25d86-7dc6-4b15-9511-3a7cbf4c7592-kube-api-access-md5qp\") pod \"98c25d86-7dc6-4b15-9511-3a7cbf4c7592\" (UID: \"98c25d86-7dc6-4b15-9511-3a7cbf4c7592\") " Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.790670 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925957f4-9ca1-4e46-b3ac-0dd0ca5713cd-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "925957f4-9ca1-4e46-b3ac-0dd0ca5713cd" (UID: "925957f4-9ca1-4e46-b3ac-0dd0ca5713cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.790746 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98c25d86-7dc6-4b15-9511-3a7cbf4c7592-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98c25d86-7dc6-4b15-9511-3a7cbf4c7592" (UID: "98c25d86-7dc6-4b15-9511-3a7cbf4c7592"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.791549 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/925957f4-9ca1-4e46-b3ac-0dd0ca5713cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.791575 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c25d86-7dc6-4b15-9511-3a7cbf4c7592-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.795864 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c25d86-7dc6-4b15-9511-3a7cbf4c7592-kube-api-access-md5qp" (OuterVolumeSpecName: "kube-api-access-md5qp") pod "98c25d86-7dc6-4b15-9511-3a7cbf4c7592" (UID: "98c25d86-7dc6-4b15-9511-3a7cbf4c7592"). InnerVolumeSpecName "kube-api-access-md5qp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.796084 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925957f4-9ca1-4e46-b3ac-0dd0ca5713cd-kube-api-access-ssl97" (OuterVolumeSpecName: "kube-api-access-ssl97") pod "925957f4-9ca1-4e46-b3ac-0dd0ca5713cd" (UID: "925957f4-9ca1-4e46-b3ac-0dd0ca5713cd"). InnerVolumeSpecName "kube-api-access-ssl97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.892494 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md5qp\" (UniqueName: \"kubernetes.io/projected/98c25d86-7dc6-4b15-9511-3a7cbf4c7592-kube-api-access-md5qp\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.892519 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssl97\" (UniqueName: \"kubernetes.io/projected/925957f4-9ca1-4e46-b3ac-0dd0ca5713cd-kube-api-access-ssl97\") on node \"crc\" DevicePath \"\"" Jan 24 00:20:57 crc kubenswrapper[4676]: I0124 00:20:57.994538 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:20:57 crc kubenswrapper[4676]: E0124 00:20:57.994736 4676 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 00:20:57 crc kubenswrapper[4676]: E0124 00:20:57.994762 4676 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 00:20:57 crc kubenswrapper[4676]: E0124 00:20:57.994820 4676 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift podName:4620e725-9218-461b-a56d-104bcb7f1df4 nodeName:}" failed. No retries permitted until 2026-01-24 00:21:13.994803028 +0000 UTC m=+1058.024774029 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift") pod "swift-storage-0" (UID: "4620e725-9218-461b-a56d-104bcb7f1df4") : configmap "swift-ring-files" not found Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.336243 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ph986" event={"ID":"925957f4-9ca1-4e46-b3ac-0dd0ca5713cd","Type":"ContainerDied","Data":"00a687de5e1dc9cd4413233520312cb969bf584cd8c200af2ba5dc7b9782e4f4"} Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.336288 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00a687de5e1dc9cd4413233520312cb969bf584cd8c200af2ba5dc7b9782e4f4" Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.336369 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ph986" Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.340956 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d7f6-account-create-update-brcr8" event={"ID":"98c25d86-7dc6-4b15-9511-3a7cbf4c7592","Type":"ContainerDied","Data":"520e95ec58419aeecc535479889b038fb2987a491e8f1735dc7b52a21a89b798"} Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.340996 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="520e95ec58419aeecc535479889b038fb2987a491e8f1735dc7b52a21a89b798" Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.341097 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d7f6-account-create-update-brcr8" Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.347354 4676 generic.go:334] "Generic (PLEG): container finished" podID="320759e3-3504-4bd4-b55a-b8cb664e1ce0" containerID="d2ed0ffdbd6bb9c785556f969ad9aeabbea68ca6c96b8f79735c762793724bfe" exitCode=0 Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.347465 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5zwrz" event={"ID":"320759e3-3504-4bd4-b55a-b8cb664e1ce0","Type":"ContainerDied","Data":"d2ed0ffdbd6bb9c785556f969ad9aeabbea68ca6c96b8f79735c762793724bfe"} Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.422290 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-x4ptm"] Jan 24 00:20:58 crc kubenswrapper[4676]: E0124 00:20:58.422607 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c25d86-7dc6-4b15-9511-3a7cbf4c7592" containerName="mariadb-account-create-update" Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.422619 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c25d86-7dc6-4b15-9511-3a7cbf4c7592" containerName="mariadb-account-create-update" Jan 24 00:20:58 crc kubenswrapper[4676]: E0124 00:20:58.422647 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925957f4-9ca1-4e46-b3ac-0dd0ca5713cd" containerName="mariadb-database-create" Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.422654 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="925957f4-9ca1-4e46-b3ac-0dd0ca5713cd" containerName="mariadb-database-create" Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.422792 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="98c25d86-7dc6-4b15-9511-3a7cbf4c7592" containerName="mariadb-account-create-update" Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.422810 4676 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="925957f4-9ca1-4e46-b3ac-0dd0ca5713cd" containerName="mariadb-database-create"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.423289 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-x4ptm"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.435912 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-x4ptm"]
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.507166 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgccq\" (UniqueName: \"kubernetes.io/projected/d6d8987f-6f37-4773-99ad-6bc7fd8971f2-kube-api-access-zgccq\") pod \"keystone-db-create-x4ptm\" (UID: \"d6d8987f-6f37-4773-99ad-6bc7fd8971f2\") " pod="openstack/keystone-db-create-x4ptm"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.507221 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6d8987f-6f37-4773-99ad-6bc7fd8971f2-operator-scripts\") pod \"keystone-db-create-x4ptm\" (UID: \"d6d8987f-6f37-4773-99ad-6bc7fd8971f2\") " pod="openstack/keystone-db-create-x4ptm"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.534831 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-60c3-account-create-update-xz4kb"]
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.535703 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-60c3-account-create-update-xz4kb"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.537433 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.542243 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-60c3-account-create-update-xz4kb"]
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.609309 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6d8987f-6f37-4773-99ad-6bc7fd8971f2-operator-scripts\") pod \"keystone-db-create-x4ptm\" (UID: \"d6d8987f-6f37-4773-99ad-6bc7fd8971f2\") " pod="openstack/keystone-db-create-x4ptm"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.609631 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42-operator-scripts\") pod \"keystone-60c3-account-create-update-xz4kb\" (UID: \"4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42\") " pod="openstack/keystone-60c3-account-create-update-xz4kb"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.609650 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2xkg\" (UniqueName: \"kubernetes.io/projected/4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42-kube-api-access-z2xkg\") pod \"keystone-60c3-account-create-update-xz4kb\" (UID: \"4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42\") " pod="openstack/keystone-60c3-account-create-update-xz4kb"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.609717 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgccq\" (UniqueName: \"kubernetes.io/projected/d6d8987f-6f37-4773-99ad-6bc7fd8971f2-kube-api-access-zgccq\") pod \"keystone-db-create-x4ptm\" (UID: \"d6d8987f-6f37-4773-99ad-6bc7fd8971f2\") " pod="openstack/keystone-db-create-x4ptm"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.610428 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6d8987f-6f37-4773-99ad-6bc7fd8971f2-operator-scripts\") pod \"keystone-db-create-x4ptm\" (UID: \"d6d8987f-6f37-4773-99ad-6bc7fd8971f2\") " pod="openstack/keystone-db-create-x4ptm"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.633323 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgccq\" (UniqueName: \"kubernetes.io/projected/d6d8987f-6f37-4773-99ad-6bc7fd8971f2-kube-api-access-zgccq\") pod \"keystone-db-create-x4ptm\" (UID: \"d6d8987f-6f37-4773-99ad-6bc7fd8971f2\") " pod="openstack/keystone-db-create-x4ptm"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.711132 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42-operator-scripts\") pod \"keystone-60c3-account-create-update-xz4kb\" (UID: \"4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42\") " pod="openstack/keystone-60c3-account-create-update-xz4kb"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.711395 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2xkg\" (UniqueName: \"kubernetes.io/projected/4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42-kube-api-access-z2xkg\") pod \"keystone-60c3-account-create-update-xz4kb\" (UID: \"4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42\") " pod="openstack/keystone-60c3-account-create-update-xz4kb"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.711817 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42-operator-scripts\") pod \"keystone-60c3-account-create-update-xz4kb\" (UID: \"4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42\") " pod="openstack/keystone-60c3-account-create-update-xz4kb"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.736954 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2xkg\" (UniqueName: \"kubernetes.io/projected/4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42-kube-api-access-z2xkg\") pod \"keystone-60c3-account-create-update-xz4kb\" (UID: \"4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42\") " pod="openstack/keystone-60c3-account-create-update-xz4kb"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.737959 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-x4ptm"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.782949 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-prlkx"]
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.783877 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-prlkx"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.793597 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-prlkx"]
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.851430 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-60c3-account-create-update-xz4kb"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.920174 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2374740d-d0b0-4a0c-b23e-d41d3ad524ac-operator-scripts\") pod \"placement-db-create-prlkx\" (UID: \"2374740d-d0b0-4a0c-b23e-d41d3ad524ac\") " pod="openstack/placement-db-create-prlkx"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.920513 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf6r7\" (UniqueName: \"kubernetes.io/projected/2374740d-d0b0-4a0c-b23e-d41d3ad524ac-kube-api-access-tf6r7\") pod \"placement-db-create-prlkx\" (UID: \"2374740d-d0b0-4a0c-b23e-d41d3ad524ac\") " pod="openstack/placement-db-create-prlkx"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.921464 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-278e-account-create-update-8fwkd"]
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.922538 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-278e-account-create-update-8fwkd"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.929756 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 24 00:20:58 crc kubenswrapper[4676]: I0124 00:20:58.941915 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-278e-account-create-update-8fwkd"]
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.022760 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf6r7\" (UniqueName: \"kubernetes.io/projected/2374740d-d0b0-4a0c-b23e-d41d3ad524ac-kube-api-access-tf6r7\") pod \"placement-db-create-prlkx\" (UID: \"2374740d-d0b0-4a0c-b23e-d41d3ad524ac\") " pod="openstack/placement-db-create-prlkx"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.022828 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/439dce27-2a59-49a2-beab-82482c1ce3cb-operator-scripts\") pod \"placement-278e-account-create-update-8fwkd\" (UID: \"439dce27-2a59-49a2-beab-82482c1ce3cb\") " pod="openstack/placement-278e-account-create-update-8fwkd"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.022871 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2374740d-d0b0-4a0c-b23e-d41d3ad524ac-operator-scripts\") pod \"placement-db-create-prlkx\" (UID: \"2374740d-d0b0-4a0c-b23e-d41d3ad524ac\") " pod="openstack/placement-db-create-prlkx"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.022923 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bt4p\" (UniqueName: \"kubernetes.io/projected/439dce27-2a59-49a2-beab-82482c1ce3cb-kube-api-access-9bt4p\") pod \"placement-278e-account-create-update-8fwkd\" (UID: \"439dce27-2a59-49a2-beab-82482c1ce3cb\") " pod="openstack/placement-278e-account-create-update-8fwkd"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.025971 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2374740d-d0b0-4a0c-b23e-d41d3ad524ac-operator-scripts\") pod \"placement-db-create-prlkx\" (UID: \"2374740d-d0b0-4a0c-b23e-d41d3ad524ac\") " pod="openstack/placement-db-create-prlkx"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.046089 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf6r7\" (UniqueName: \"kubernetes.io/projected/2374740d-d0b0-4a0c-b23e-d41d3ad524ac-kube-api-access-tf6r7\") pod \"placement-db-create-prlkx\" (UID: \"2374740d-d0b0-4a0c-b23e-d41d3ad524ac\") " pod="openstack/placement-db-create-prlkx"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.049011 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.125463 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bt4p\" (UniqueName: \"kubernetes.io/projected/439dce27-2a59-49a2-beab-82482c1ce3cb-kube-api-access-9bt4p\") pod \"placement-278e-account-create-update-8fwkd\" (UID: \"439dce27-2a59-49a2-beab-82482c1ce3cb\") " pod="openstack/placement-278e-account-create-update-8fwkd"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.125642 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/439dce27-2a59-49a2-beab-82482c1ce3cb-operator-scripts\") pod \"placement-278e-account-create-update-8fwkd\" (UID: \"439dce27-2a59-49a2-beab-82482c1ce3cb\") " pod="openstack/placement-278e-account-create-update-8fwkd"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.126478 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/439dce27-2a59-49a2-beab-82482c1ce3cb-operator-scripts\") pod \"placement-278e-account-create-update-8fwkd\" (UID: \"439dce27-2a59-49a2-beab-82482c1ce3cb\") " pod="openstack/placement-278e-account-create-update-8fwkd"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.147838 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bt4p\" (UniqueName: \"kubernetes.io/projected/439dce27-2a59-49a2-beab-82482c1ce3cb-kube-api-access-9bt4p\") pod \"placement-278e-account-create-update-8fwkd\" (UID: \"439dce27-2a59-49a2-beab-82482c1ce3cb\") " pod="openstack/placement-278e-account-create-update-8fwkd"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.148805 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-prlkx"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.275829 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-278e-account-create-update-8fwkd"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.278125 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-x4ptm"]
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.356682 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kbtsc"]
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.357652 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kbtsc"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.360340 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.360557 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-s5zsb"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.377915 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-x4ptm" event={"ID":"d6d8987f-6f37-4773-99ad-6bc7fd8971f2","Type":"ContainerStarted","Data":"5e11825261a30b8eb14d5f7d2fc6206facb16e2daf33a5873c566c87690bb5b5"}
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.379238 4676 generic.go:334] "Generic (PLEG): container finished" podID="4ff61f48-e451-47e8-adcc-0870b29d28a9" containerID="a2edcf50a838124e8087fc219bf5e4c2e5444fe834157a78116049634218ec2a" exitCode=0
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.379396 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5fmzb" event={"ID":"4ff61f48-e451-47e8-adcc-0870b29d28a9","Type":"ContainerDied","Data":"a2edcf50a838124e8087fc219bf5e4c2e5444fe834157a78116049634218ec2a"}
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.379477 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kbtsc"]
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.432046 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-combined-ca-bundle\") pod \"glance-db-sync-kbtsc\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " pod="openstack/glance-db-sync-kbtsc"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.432133 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6qmf\" (UniqueName: \"kubernetes.io/projected/7ebdc261-7c0d-49a2-9233-868fda906788-kube-api-access-t6qmf\") pod \"glance-db-sync-kbtsc\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " pod="openstack/glance-db-sync-kbtsc"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.432177 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-db-sync-config-data\") pod \"glance-db-sync-kbtsc\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " pod="openstack/glance-db-sync-kbtsc"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.432215 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-config-data\") pod \"glance-db-sync-kbtsc\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " pod="openstack/glance-db-sync-kbtsc"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.442490 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-60c3-account-create-update-xz4kb"]
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.538146 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6qmf\" (UniqueName: \"kubernetes.io/projected/7ebdc261-7c0d-49a2-9233-868fda906788-kube-api-access-t6qmf\") pod \"glance-db-sync-kbtsc\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " pod="openstack/glance-db-sync-kbtsc"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.538220 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-db-sync-config-data\") pod \"glance-db-sync-kbtsc\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " pod="openstack/glance-db-sync-kbtsc"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.538263 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-config-data\") pod \"glance-db-sync-kbtsc\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " pod="openstack/glance-db-sync-kbtsc"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.538321 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-combined-ca-bundle\") pod \"glance-db-sync-kbtsc\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " pod="openstack/glance-db-sync-kbtsc"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.544680 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-db-sync-config-data\") pod \"glance-db-sync-kbtsc\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " pod="openstack/glance-db-sync-kbtsc"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.545690 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-config-data\") pod \"glance-db-sync-kbtsc\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " pod="openstack/glance-db-sync-kbtsc"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.546938 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-combined-ca-bundle\") pod \"glance-db-sync-kbtsc\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " pod="openstack/glance-db-sync-kbtsc"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.567068 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6qmf\" (UniqueName: \"kubernetes.io/projected/7ebdc261-7c0d-49a2-9233-868fda906788-kube-api-access-t6qmf\") pod \"glance-db-sync-kbtsc\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " pod="openstack/glance-db-sync-kbtsc"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.691038 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-prlkx"]
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.747550 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kbtsc"
Jan 24 00:20:59 crc kubenswrapper[4676]: I0124 00:20:59.845303 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-278e-account-create-update-8fwkd"]
Jan 24 00:20:59 crc kubenswrapper[4676]: W0124 00:20:59.909405 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod439dce27_2a59_49a2_beab_82482c1ce3cb.slice/crio-db36ca4b04238f99f0a64b033baa2442b0b23d7197b6ba5c71dd6b5dc07b0500 WatchSource:0}: Error finding container db36ca4b04238f99f0a64b033baa2442b0b23d7197b6ba5c71dd6b5dc07b0500: Status 404 returned error can't find the container with id db36ca4b04238f99f0a64b033baa2442b0b23d7197b6ba5c71dd6b5dc07b0500
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.092930 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5zwrz"
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.152909 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/320759e3-3504-4bd4-b55a-b8cb664e1ce0-operator-scripts\") pod \"320759e3-3504-4bd4-b55a-b8cb664e1ce0\" (UID: \"320759e3-3504-4bd4-b55a-b8cb664e1ce0\") "
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.153059 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krzzm\" (UniqueName: \"kubernetes.io/projected/320759e3-3504-4bd4-b55a-b8cb664e1ce0-kube-api-access-krzzm\") pod \"320759e3-3504-4bd4-b55a-b8cb664e1ce0\" (UID: \"320759e3-3504-4bd4-b55a-b8cb664e1ce0\") "
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.155189 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/320759e3-3504-4bd4-b55a-b8cb664e1ce0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "320759e3-3504-4bd4-b55a-b8cb664e1ce0" (UID: "320759e3-3504-4bd4-b55a-b8cb664e1ce0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.195610 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/320759e3-3504-4bd4-b55a-b8cb664e1ce0-kube-api-access-krzzm" (OuterVolumeSpecName: "kube-api-access-krzzm") pod "320759e3-3504-4bd4-b55a-b8cb664e1ce0" (UID: "320759e3-3504-4bd4-b55a-b8cb664e1ce0"). InnerVolumeSpecName "kube-api-access-krzzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.258466 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krzzm\" (UniqueName: \"kubernetes.io/projected/320759e3-3504-4bd4-b55a-b8cb664e1ce0-kube-api-access-krzzm\") on node \"crc\" DevicePath \"\""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.258494 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/320759e3-3504-4bd4-b55a-b8cb664e1ce0-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.310817 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kbtsc"]
Jan 24 00:21:00 crc kubenswrapper[4676]: W0124 00:21:00.321675 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ebdc261_7c0d_49a2_9233_868fda906788.slice/crio-e532ae412cda92f465aa9e050762bb063a13c20d49e7ab63fe45cd1ee4ddc01b WatchSource:0}: Error finding container e532ae412cda92f465aa9e050762bb063a13c20d49e7ab63fe45cd1ee4ddc01b: Status 404 returned error can't find the container with id e532ae412cda92f465aa9e050762bb063a13c20d49e7ab63fe45cd1ee4ddc01b
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.392387 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kbtsc" event={"ID":"7ebdc261-7c0d-49a2-9233-868fda906788","Type":"ContainerStarted","Data":"e532ae412cda92f465aa9e050762bb063a13c20d49e7ab63fe45cd1ee4ddc01b"}
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.393734 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-278e-account-create-update-8fwkd" event={"ID":"439dce27-2a59-49a2-beab-82482c1ce3cb","Type":"ContainerStarted","Data":"b1e8f656f78cf6d79b93f73d693ca8f011b12dbd49906ddf8afe723c5036938e"}
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.393772 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-278e-account-create-update-8fwkd" event={"ID":"439dce27-2a59-49a2-beab-82482c1ce3cb","Type":"ContainerStarted","Data":"db36ca4b04238f99f0a64b033baa2442b0b23d7197b6ba5c71dd6b5dc07b0500"}
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.395288 4676 generic.go:334] "Generic (PLEG): container finished" podID="4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42" containerID="7b1df20731b13a3b81d6ecd68e39c0140b372b18913afe443cbe4c6b00f28ea4" exitCode=0
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.395445 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-60c3-account-create-update-xz4kb" event={"ID":"4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42","Type":"ContainerDied","Data":"7b1df20731b13a3b81d6ecd68e39c0140b372b18913afe443cbe4c6b00f28ea4"}
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.395478 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-60c3-account-create-update-xz4kb" event={"ID":"4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42","Type":"ContainerStarted","Data":"10f1971dd827090a805df3cf92e6f04a60f7b6bf074806ac40846fa7212ad001"}
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.398319 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5zwrz"
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.398322 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5zwrz" event={"ID":"320759e3-3504-4bd4-b55a-b8cb664e1ce0","Type":"ContainerDied","Data":"a1c7b5b20a313f8f253066a409ba7019da788bdf5e0a133113d60074a31eb7cf"}
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.398582 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1c7b5b20a313f8f253066a409ba7019da788bdf5e0a133113d60074a31eb7cf"
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.399753 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-prlkx" event={"ID":"2374740d-d0b0-4a0c-b23e-d41d3ad524ac","Type":"ContainerStarted","Data":"d5c280bb67bfb03306230881277a4f3cf73d70180a6643cdcdc6c29c26f5cdcb"}
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.399779 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-prlkx" event={"ID":"2374740d-d0b0-4a0c-b23e-d41d3ad524ac","Type":"ContainerStarted","Data":"2431e513e27795629ea214c200cf83d2b16cbdbd751217fe08aac5b0be029497"}
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.401581 4676 generic.go:334] "Generic (PLEG): container finished" podID="d6d8987f-6f37-4773-99ad-6bc7fd8971f2" containerID="3555dab39dcc24cb3d738eb810520147594f4722ea764e376bb893954691a29d" exitCode=0
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.401650 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-x4ptm" event={"ID":"d6d8987f-6f37-4773-99ad-6bc7fd8971f2","Type":"ContainerDied","Data":"3555dab39dcc24cb3d738eb810520147594f4722ea764e376bb893954691a29d"}
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.415635 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-278e-account-create-update-8fwkd" podStartSLOduration=2.415616728 podStartE2EDuration="2.415616728s" podCreationTimestamp="2026-01-24 00:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:21:00.410609635 +0000 UTC m=+1044.440580636" watchObservedRunningTime="2026-01-24 00:21:00.415616728 +0000 UTC m=+1044.445587729"
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.776360 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.797512 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-prlkx" podStartSLOduration=2.797497223 podStartE2EDuration="2.797497223s" podCreationTimestamp="2026-01-24 00:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:21:00.490614646 +0000 UTC m=+1044.520585647" watchObservedRunningTime="2026-01-24 00:21:00.797497223 +0000 UTC m=+1044.827468214"
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.869267 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4ff61f48-e451-47e8-adcc-0870b29d28a9-ring-data-devices\") pod \"4ff61f48-e451-47e8-adcc-0870b29d28a9\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") "
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.869312 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zd5k\" (UniqueName: \"kubernetes.io/projected/4ff61f48-e451-47e8-adcc-0870b29d28a9-kube-api-access-2zd5k\") pod \"4ff61f48-e451-47e8-adcc-0870b29d28a9\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") "
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.869345 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-swiftconf\") pod \"4ff61f48-e451-47e8-adcc-0870b29d28a9\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") "
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.869407 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ff61f48-e451-47e8-adcc-0870b29d28a9-scripts\") pod \"4ff61f48-e451-47e8-adcc-0870b29d28a9\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") "
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.869468 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-dispersionconf\") pod \"4ff61f48-e451-47e8-adcc-0870b29d28a9\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") "
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.869492 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-combined-ca-bundle\") pod \"4ff61f48-e451-47e8-adcc-0870b29d28a9\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") "
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.869652 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4ff61f48-e451-47e8-adcc-0870b29d28a9-etc-swift\") pod \"4ff61f48-e451-47e8-adcc-0870b29d28a9\" (UID: \"4ff61f48-e451-47e8-adcc-0870b29d28a9\") "
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.870967 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ff61f48-e451-47e8-adcc-0870b29d28a9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4ff61f48-e451-47e8-adcc-0870b29d28a9" (UID: "4ff61f48-e451-47e8-adcc-0870b29d28a9"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.871359 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ff61f48-e451-47e8-adcc-0870b29d28a9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4ff61f48-e451-47e8-adcc-0870b29d28a9" (UID: "4ff61f48-e451-47e8-adcc-0870b29d28a9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.874284 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff61f48-e451-47e8-adcc-0870b29d28a9-kube-api-access-2zd5k" (OuterVolumeSpecName: "kube-api-access-2zd5k") pod "4ff61f48-e451-47e8-adcc-0870b29d28a9" (UID: "4ff61f48-e451-47e8-adcc-0870b29d28a9"). InnerVolumeSpecName "kube-api-access-2zd5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.881369 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4ff61f48-e451-47e8-adcc-0870b29d28a9" (UID: "4ff61f48-e451-47e8-adcc-0870b29d28a9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.898537 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ff61f48-e451-47e8-adcc-0870b29d28a9-scripts" (OuterVolumeSpecName: "scripts") pod "4ff61f48-e451-47e8-adcc-0870b29d28a9" (UID: "4ff61f48-e451-47e8-adcc-0870b29d28a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.898757 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ff61f48-e451-47e8-adcc-0870b29d28a9" (UID: "4ff61f48-e451-47e8-adcc-0870b29d28a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.900185 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4ff61f48-e451-47e8-adcc-0870b29d28a9" (UID: "4ff61f48-e451-47e8-adcc-0870b29d28a9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.971616 4676 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4ff61f48-e451-47e8-adcc-0870b29d28a9-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.971648 4676 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4ff61f48-e451-47e8-adcc-0870b29d28a9-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.971660 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zd5k\" (UniqueName: \"kubernetes.io/projected/4ff61f48-e451-47e8-adcc-0870b29d28a9-kube-api-access-2zd5k\") on node \"crc\" DevicePath \"\""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.971669 4676 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.971677 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ff61f48-e451-47e8-adcc-0870b29d28a9-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.971685 4676 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 24 00:21:00 crc kubenswrapper[4676]: I0124 00:21:00.971694 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ff61f48-e451-47e8-adcc-0870b29d28a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.410915 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5fmzb"
Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.410956 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5fmzb" event={"ID":"4ff61f48-e451-47e8-adcc-0870b29d28a9","Type":"ContainerDied","Data":"edae868ed3f37d7cd803c42347470b3ff0f9857923570cef0ccdd12a7ade323b"}
Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.410992 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edae868ed3f37d7cd803c42347470b3ff0f9857923570cef0ccdd12a7ade323b"
Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.412568 4676 generic.go:334] "Generic (PLEG): container finished" podID="439dce27-2a59-49a2-beab-82482c1ce3cb" containerID="b1e8f656f78cf6d79b93f73d693ca8f011b12dbd49906ddf8afe723c5036938e" exitCode=0
Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.412631 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-278e-account-create-update-8fwkd" event={"ID":"439dce27-2a59-49a2-beab-82482c1ce3cb","Type":"ContainerDied","Data":"b1e8f656f78cf6d79b93f73d693ca8f011b12dbd49906ddf8afe723c5036938e"}
Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.415048 4676 generic.go:334] "Generic (PLEG): container finished" podID="2374740d-d0b0-4a0c-b23e-d41d3ad524ac" containerID="d5c280bb67bfb03306230881277a4f3cf73d70180a6643cdcdc6c29c26f5cdcb" exitCode=0
Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.415284 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-prlkx" event={"ID":"2374740d-d0b0-4a0c-b23e-d41d3ad524ac","Type":"ContainerDied","Data":"d5c280bb67bfb03306230881277a4f3cf73d70180a6643cdcdc6c29c26f5cdcb"}
Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.891291 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-60c3-account-create-update-xz4kb"
Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.895757 4676 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-db-create-x4ptm" Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.988982 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgccq\" (UniqueName: \"kubernetes.io/projected/d6d8987f-6f37-4773-99ad-6bc7fd8971f2-kube-api-access-zgccq\") pod \"d6d8987f-6f37-4773-99ad-6bc7fd8971f2\" (UID: \"d6d8987f-6f37-4773-99ad-6bc7fd8971f2\") " Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.989028 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42-operator-scripts\") pod \"4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42\" (UID: \"4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42\") " Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.989087 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6d8987f-6f37-4773-99ad-6bc7fd8971f2-operator-scripts\") pod \"d6d8987f-6f37-4773-99ad-6bc7fd8971f2\" (UID: \"d6d8987f-6f37-4773-99ad-6bc7fd8971f2\") " Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.989166 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2xkg\" (UniqueName: \"kubernetes.io/projected/4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42-kube-api-access-z2xkg\") pod \"4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42\" (UID: \"4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42\") " Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.990601 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6d8987f-6f37-4773-99ad-6bc7fd8971f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d6d8987f-6f37-4773-99ad-6bc7fd8971f2" (UID: "d6d8987f-6f37-4773-99ad-6bc7fd8971f2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.992598 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42" (UID: "4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.995168 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6d8987f-6f37-4773-99ad-6bc7fd8971f2-kube-api-access-zgccq" (OuterVolumeSpecName: "kube-api-access-zgccq") pod "d6d8987f-6f37-4773-99ad-6bc7fd8971f2" (UID: "d6d8987f-6f37-4773-99ad-6bc7fd8971f2"). InnerVolumeSpecName "kube-api-access-zgccq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:01 crc kubenswrapper[4676]: I0124 00:21:01.996072 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42-kube-api-access-z2xkg" (OuterVolumeSpecName: "kube-api-access-z2xkg") pod "4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42" (UID: "4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42"). InnerVolumeSpecName "kube-api-access-z2xkg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.091021 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgccq\" (UniqueName: \"kubernetes.io/projected/d6d8987f-6f37-4773-99ad-6bc7fd8971f2-kube-api-access-zgccq\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.091053 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.091067 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6d8987f-6f37-4773-99ad-6bc7fd8971f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.091078 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2xkg\" (UniqueName: \"kubernetes.io/projected/4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42-kube-api-access-z2xkg\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.245323 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5zwrz"] Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.252118 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5zwrz"] Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.273446 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="320759e3-3504-4bd4-b55a-b8cb664e1ce0" path="/var/lib/kubelet/pods/320759e3-3504-4bd4-b55a-b8cb664e1ce0/volumes" Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.425049 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-x4ptm" Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.425081 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-x4ptm" event={"ID":"d6d8987f-6f37-4773-99ad-6bc7fd8971f2","Type":"ContainerDied","Data":"5e11825261a30b8eb14d5f7d2fc6206facb16e2daf33a5873c566c87690bb5b5"} Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.425118 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e11825261a30b8eb14d5f7d2fc6206facb16e2daf33a5873c566c87690bb5b5" Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.430320 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-60c3-account-create-update-xz4kb" Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.430411 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-60c3-account-create-update-xz4kb" event={"ID":"4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42","Type":"ContainerDied","Data":"10f1971dd827090a805df3cf92e6f04a60f7b6bf074806ac40846fa7212ad001"} Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.430457 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10f1971dd827090a805df3cf92e6f04a60f7b6bf074806ac40846fa7212ad001" Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.766606 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-prlkx" Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.903244 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf6r7\" (UniqueName: \"kubernetes.io/projected/2374740d-d0b0-4a0c-b23e-d41d3ad524ac-kube-api-access-tf6r7\") pod \"2374740d-d0b0-4a0c-b23e-d41d3ad524ac\" (UID: \"2374740d-d0b0-4a0c-b23e-d41d3ad524ac\") " Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.903343 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2374740d-d0b0-4a0c-b23e-d41d3ad524ac-operator-scripts\") pod \"2374740d-d0b0-4a0c-b23e-d41d3ad524ac\" (UID: \"2374740d-d0b0-4a0c-b23e-d41d3ad524ac\") " Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.904094 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2374740d-d0b0-4a0c-b23e-d41d3ad524ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2374740d-d0b0-4a0c-b23e-d41d3ad524ac" (UID: "2374740d-d0b0-4a0c-b23e-d41d3ad524ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.928020 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2374740d-d0b0-4a0c-b23e-d41d3ad524ac-kube-api-access-tf6r7" (OuterVolumeSpecName: "kube-api-access-tf6r7") pod "2374740d-d0b0-4a0c-b23e-d41d3ad524ac" (UID: "2374740d-d0b0-4a0c-b23e-d41d3ad524ac"). InnerVolumeSpecName "kube-api-access-tf6r7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:02 crc kubenswrapper[4676]: I0124 00:21:02.966726 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-278e-account-create-update-8fwkd" Jan 24 00:21:03 crc kubenswrapper[4676]: I0124 00:21:03.004890 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf6r7\" (UniqueName: \"kubernetes.io/projected/2374740d-d0b0-4a0c-b23e-d41d3ad524ac-kube-api-access-tf6r7\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:03 crc kubenswrapper[4676]: I0124 00:21:03.004919 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2374740d-d0b0-4a0c-b23e-d41d3ad524ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:03 crc kubenswrapper[4676]: I0124 00:21:03.105752 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bt4p\" (UniqueName: \"kubernetes.io/projected/439dce27-2a59-49a2-beab-82482c1ce3cb-kube-api-access-9bt4p\") pod \"439dce27-2a59-49a2-beab-82482c1ce3cb\" (UID: \"439dce27-2a59-49a2-beab-82482c1ce3cb\") " Jan 24 00:21:03 crc kubenswrapper[4676]: I0124 00:21:03.105881 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/439dce27-2a59-49a2-beab-82482c1ce3cb-operator-scripts\") pod \"439dce27-2a59-49a2-beab-82482c1ce3cb\" (UID: \"439dce27-2a59-49a2-beab-82482c1ce3cb\") " Jan 24 00:21:03 crc kubenswrapper[4676]: I0124 00:21:03.106692 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/439dce27-2a59-49a2-beab-82482c1ce3cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "439dce27-2a59-49a2-beab-82482c1ce3cb" (UID: "439dce27-2a59-49a2-beab-82482c1ce3cb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:03 crc kubenswrapper[4676]: I0124 00:21:03.106887 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/439dce27-2a59-49a2-beab-82482c1ce3cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:03 crc kubenswrapper[4676]: I0124 00:21:03.130797 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/439dce27-2a59-49a2-beab-82482c1ce3cb-kube-api-access-9bt4p" (OuterVolumeSpecName: "kube-api-access-9bt4p") pod "439dce27-2a59-49a2-beab-82482c1ce3cb" (UID: "439dce27-2a59-49a2-beab-82482c1ce3cb"). InnerVolumeSpecName "kube-api-access-9bt4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:03 crc kubenswrapper[4676]: I0124 00:21:03.208960 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bt4p\" (UniqueName: \"kubernetes.io/projected/439dce27-2a59-49a2-beab-82482c1ce3cb-kube-api-access-9bt4p\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:03 crc kubenswrapper[4676]: I0124 00:21:03.439264 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-prlkx" Jan 24 00:21:03 crc kubenswrapper[4676]: I0124 00:21:03.439260 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-prlkx" event={"ID":"2374740d-d0b0-4a0c-b23e-d41d3ad524ac","Type":"ContainerDied","Data":"2431e513e27795629ea214c200cf83d2b16cbdbd751217fe08aac5b0be029497"} Jan 24 00:21:03 crc kubenswrapper[4676]: I0124 00:21:03.439604 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2431e513e27795629ea214c200cf83d2b16cbdbd751217fe08aac5b0be029497" Jan 24 00:21:03 crc kubenswrapper[4676]: I0124 00:21:03.440615 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-278e-account-create-update-8fwkd" event={"ID":"439dce27-2a59-49a2-beab-82482c1ce3cb","Type":"ContainerDied","Data":"db36ca4b04238f99f0a64b033baa2442b0b23d7197b6ba5c71dd6b5dc07b0500"} Jan 24 00:21:03 crc kubenswrapper[4676]: I0124 00:21:03.440636 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db36ca4b04238f99f0a64b033baa2442b0b23d7197b6ba5c71dd6b5dc07b0500" Jan 24 00:21:03 crc kubenswrapper[4676]: I0124 00:21:03.440683 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-278e-account-create-update-8fwkd" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.256801 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-drjg7"] Jan 24 00:21:07 crc kubenswrapper[4676]: E0124 00:21:07.257323 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320759e3-3504-4bd4-b55a-b8cb664e1ce0" containerName="mariadb-account-create-update" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.257334 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="320759e3-3504-4bd4-b55a-b8cb664e1ce0" containerName="mariadb-account-create-update" Jan 24 00:21:07 crc kubenswrapper[4676]: E0124 00:21:07.257349 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439dce27-2a59-49a2-beab-82482c1ce3cb" containerName="mariadb-account-create-update" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.257355 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="439dce27-2a59-49a2-beab-82482c1ce3cb" containerName="mariadb-account-create-update" Jan 24 00:21:07 crc kubenswrapper[4676]: E0124 00:21:07.257368 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff61f48-e451-47e8-adcc-0870b29d28a9" containerName="swift-ring-rebalance" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.257388 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff61f48-e451-47e8-adcc-0870b29d28a9" containerName="swift-ring-rebalance" Jan 24 00:21:07 crc kubenswrapper[4676]: E0124 00:21:07.257401 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2374740d-d0b0-4a0c-b23e-d41d3ad524ac" containerName="mariadb-database-create" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.257407 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2374740d-d0b0-4a0c-b23e-d41d3ad524ac" containerName="mariadb-database-create" Jan 24 00:21:07 crc kubenswrapper[4676]: E0124 00:21:07.257426 4676 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42" containerName="mariadb-account-create-update" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.257433 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42" containerName="mariadb-account-create-update" Jan 24 00:21:07 crc kubenswrapper[4676]: E0124 00:21:07.257443 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6d8987f-6f37-4773-99ad-6bc7fd8971f2" containerName="mariadb-database-create" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.257449 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6d8987f-6f37-4773-99ad-6bc7fd8971f2" containerName="mariadb-database-create" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.257580 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42" containerName="mariadb-account-create-update" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.257592 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="320759e3-3504-4bd4-b55a-b8cb664e1ce0" containerName="mariadb-account-create-update" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.257601 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ff61f48-e451-47e8-adcc-0870b29d28a9" containerName="swift-ring-rebalance" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.257609 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6d8987f-6f37-4773-99ad-6bc7fd8971f2" containerName="mariadb-database-create" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.257619 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="439dce27-2a59-49a2-beab-82482c1ce3cb" containerName="mariadb-account-create-update" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.257625 4676 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2374740d-d0b0-4a0c-b23e-d41d3ad524ac" containerName="mariadb-database-create" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.258148 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-drjg7" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.260776 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.264534 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-drjg7"] Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.391298 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20421570-a66e-4903-937f-af2ab847b5f1-operator-scripts\") pod \"root-account-create-update-drjg7\" (UID: \"20421570-a66e-4903-937f-af2ab847b5f1\") " pod="openstack/root-account-create-update-drjg7" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.392213 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7wkn\" (UniqueName: \"kubernetes.io/projected/20421570-a66e-4903-937f-af2ab847b5f1-kube-api-access-b7wkn\") pod \"root-account-create-update-drjg7\" (UID: \"20421570-a66e-4903-937f-af2ab847b5f1\") " pod="openstack/root-account-create-update-drjg7" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.493496 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20421570-a66e-4903-937f-af2ab847b5f1-operator-scripts\") pod \"root-account-create-update-drjg7\" (UID: \"20421570-a66e-4903-937f-af2ab847b5f1\") " pod="openstack/root-account-create-update-drjg7" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.493545 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b7wkn\" (UniqueName: \"kubernetes.io/projected/20421570-a66e-4903-937f-af2ab847b5f1-kube-api-access-b7wkn\") pod \"root-account-create-update-drjg7\" (UID: \"20421570-a66e-4903-937f-af2ab847b5f1\") " pod="openstack/root-account-create-update-drjg7" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.494149 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20421570-a66e-4903-937f-af2ab847b5f1-operator-scripts\") pod \"root-account-create-update-drjg7\" (UID: \"20421570-a66e-4903-937f-af2ab847b5f1\") " pod="openstack/root-account-create-update-drjg7" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.528226 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7wkn\" (UniqueName: \"kubernetes.io/projected/20421570-a66e-4903-937f-af2ab847b5f1-kube-api-access-b7wkn\") pod \"root-account-create-update-drjg7\" (UID: \"20421570-a66e-4903-937f-af2ab847b5f1\") " pod="openstack/root-account-create-update-drjg7" Jan 24 00:21:07 crc kubenswrapper[4676]: I0124 00:21:07.575224 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-drjg7" Jan 24 00:21:08 crc kubenswrapper[4676]: I0124 00:21:08.480299 4676 generic.go:334] "Generic (PLEG): container finished" podID="36558de2-6aac-43e9-832d-2f96c46e8152" containerID="9d758f8c9e92f01e23b0692665a48971ec4eb0595df60b78749bc71450ba8960" exitCode=0 Jan 24 00:21:08 crc kubenswrapper[4676]: I0124 00:21:08.480439 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"36558de2-6aac-43e9-832d-2f96c46e8152","Type":"ContainerDied","Data":"9d758f8c9e92f01e23b0692665a48971ec4eb0595df60b78749bc71450ba8960"} Jan 24 00:21:08 crc kubenswrapper[4676]: I0124 00:21:08.482455 4676 generic.go:334] "Generic (PLEG): container finished" podID="68d6466c-a6ff-40ba-952d-007b14efdfd3" containerID="0db0998b0243f3673ede9e2b18bccbbf8c17216722eeabea36670a8396448723" exitCode=0 Jan 24 00:21:08 crc kubenswrapper[4676]: I0124 00:21:08.482502 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"68d6466c-a6ff-40ba-952d-007b14efdfd3","Type":"ContainerDied","Data":"0db0998b0243f3673ede9e2b18bccbbf8c17216722eeabea36670a8396448723"} Jan 24 00:21:10 crc kubenswrapper[4676]: I0124 00:21:10.380490 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jsd4n" podUID="7adbcf83-efbd-4e8d-97e5-f8768463284a" containerName="ovn-controller" probeResult="failure" output=< Jan 24 00:21:10 crc kubenswrapper[4676]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 24 00:21:10 crc kubenswrapper[4676]: > Jan 24 00:21:10 crc kubenswrapper[4676]: I0124 00:21:10.503784 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 00:21:10 crc kubenswrapper[4676]: I0124 00:21:10.504653 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6sl9q" Jan 24 
00:21:10 crc kubenswrapper[4676]: I0124 00:21:10.942447 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jsd4n-config-lcgr8"] Jan 24 00:21:10 crc kubenswrapper[4676]: I0124 00:21:10.945521 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:10 crc kubenswrapper[4676]: I0124 00:21:10.947717 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 24 00:21:10 crc kubenswrapper[4676]: I0124 00:21:10.952041 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jsd4n-config-lcgr8"] Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.054903 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-run-ovn\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.055046 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-log-ovn\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.055077 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8s74\" (UniqueName: \"kubernetes.io/projected/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-kube-api-access-w8s74\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 
00:21:11.055535 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-scripts\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.055766 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-additional-scripts\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.055796 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-run\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.157764 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-additional-scripts\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.158120 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-run\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc 
kubenswrapper[4676]: I0124 00:21:11.158155 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-run-ovn\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.158243 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-log-ovn\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.158271 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8s74\" (UniqueName: \"kubernetes.io/projected/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-kube-api-access-w8s74\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.158337 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-scripts\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.158390 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-run\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 
00:21:11.158445 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-additional-scripts\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.158454 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-log-ovn\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.158492 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-run-ovn\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.161021 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-scripts\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.196460 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8s74\" (UniqueName: \"kubernetes.io/projected/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-kube-api-access-w8s74\") pod \"ovn-controller-jsd4n-config-lcgr8\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:11 crc kubenswrapper[4676]: I0124 00:21:11.267771 4676 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:14 crc kubenswrapper[4676]: I0124 00:21:14.004642 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:21:14 crc kubenswrapper[4676]: I0124 00:21:14.018098 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4620e725-9218-461b-a56d-104bcb7f1df4-etc-swift\") pod \"swift-storage-0\" (UID: \"4620e725-9218-461b-a56d-104bcb7f1df4\") " pod="openstack/swift-storage-0" Jan 24 00:21:14 crc kubenswrapper[4676]: I0124 00:21:14.151544 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 24 00:21:15 crc kubenswrapper[4676]: I0124 00:21:15.387192 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jsd4n" podUID="7adbcf83-efbd-4e8d-97e5-f8768463284a" containerName="ovn-controller" probeResult="failure" output=< Jan 24 00:21:15 crc kubenswrapper[4676]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 24 00:21:15 crc kubenswrapper[4676]: > Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.134417 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-drjg7"] Jan 24 00:21:16 crc kubenswrapper[4676]: W0124 00:21:16.146461 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20421570_a66e_4903_937f_af2ab847b5f1.slice/crio-6fe7e9ee134a3384cdb14a9c06f284d65f4ca15260ccd9bddf8bd062136f714f WatchSource:0}: Error finding container 6fe7e9ee134a3384cdb14a9c06f284d65f4ca15260ccd9bddf8bd062136f714f: Status 404 
returned error can't find the container with id 6fe7e9ee134a3384cdb14a9c06f284d65f4ca15260ccd9bddf8bd062136f714f Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.242750 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jsd4n-config-lcgr8"] Jan 24 00:21:16 crc kubenswrapper[4676]: W0124 00:21:16.254847 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5533b3f_acbd_48c8_ad6f_96ba8ea7a435.slice/crio-6ab4ddbfca70e6160be9f44ee5e62ac3fe6c06d281ba30970058467edbdbafdb WatchSource:0}: Error finding container 6ab4ddbfca70e6160be9f44ee5e62ac3fe6c06d281ba30970058467edbdbafdb: Status 404 returned error can't find the container with id 6ab4ddbfca70e6160be9f44ee5e62ac3fe6c06d281ba30970058467edbdbafdb Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.360978 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.555235 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"68d6466c-a6ff-40ba-952d-007b14efdfd3","Type":"ContainerStarted","Data":"7ab3a50e872eb2d62ccafb5072d9093f176eca7aee4a3935c8627a779d608385"} Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.555594 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.556985 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"36558de2-6aac-43e9-832d-2f96c46e8152","Type":"ContainerStarted","Data":"028693ff31c2a35e401cae916e148bd75fd1aa0c3135036bd6feedcc4c32fed9"} Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.557579 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.558823 4676 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kbtsc" event={"ID":"7ebdc261-7c0d-49a2-9233-868fda906788","Type":"ContainerStarted","Data":"1c08827f5b622dc0d2e57d1709fbff0d00bb4f69592994ac4ce101a373fb04cb"} Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.559915 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jsd4n-config-lcgr8" event={"ID":"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435","Type":"ContainerStarted","Data":"6ab4ddbfca70e6160be9f44ee5e62ac3fe6c06d281ba30970058467edbdbafdb"} Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.560702 4676 generic.go:334] "Generic (PLEG): container finished" podID="20421570-a66e-4903-937f-af2ab847b5f1" containerID="a94f61fa91f9d6c0d5e0b0e07bb624e5cb428e8002a06af7cee89d3dcb90c4cd" exitCode=0 Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.560739 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-drjg7" event={"ID":"20421570-a66e-4903-937f-af2ab847b5f1","Type":"ContainerDied","Data":"a94f61fa91f9d6c0d5e0b0e07bb624e5cb428e8002a06af7cee89d3dcb90c4cd"} Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.560753 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-drjg7" event={"ID":"20421570-a66e-4903-937f-af2ab847b5f1","Type":"ContainerStarted","Data":"6fe7e9ee134a3384cdb14a9c06f284d65f4ca15260ccd9bddf8bd062136f714f"} Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.561371 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"76813373ab91f4561b4c34edc63c14ee6c3985d8e49f0b8588af9a13cafe7bed"} Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.589936 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=45.023265926 podStartE2EDuration="1m22.589921505s" 
podCreationTimestamp="2026-01-24 00:19:54 +0000 UTC" firstStartedPulling="2026-01-24 00:19:56.385922389 +0000 UTC m=+980.415893390" lastFinishedPulling="2026-01-24 00:20:33.952577968 +0000 UTC m=+1017.982548969" observedRunningTime="2026-01-24 00:21:16.585721926 +0000 UTC m=+1060.615692927" watchObservedRunningTime="2026-01-24 00:21:16.589921505 +0000 UTC m=+1060.619892496" Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.615282 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kbtsc" podStartSLOduration=2.19904079 podStartE2EDuration="17.615264721s" podCreationTimestamp="2026-01-24 00:20:59 +0000 UTC" firstStartedPulling="2026-01-24 00:21:00.325442918 +0000 UTC m=+1044.355413919" lastFinishedPulling="2026-01-24 00:21:15.741666849 +0000 UTC m=+1059.771637850" observedRunningTime="2026-01-24 00:21:16.605745779 +0000 UTC m=+1060.635716780" watchObservedRunningTime="2026-01-24 00:21:16.615264721 +0000 UTC m=+1060.645235712" Jan 24 00:21:16 crc kubenswrapper[4676]: I0124 00:21:16.641156 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=45.838200991 podStartE2EDuration="1m22.641139204s" podCreationTimestamp="2026-01-24 00:19:54 +0000 UTC" firstStartedPulling="2026-01-24 00:19:56.637072479 +0000 UTC m=+980.667043480" lastFinishedPulling="2026-01-24 00:20:33.440010672 +0000 UTC m=+1017.469981693" observedRunningTime="2026-01-24 00:21:16.640527535 +0000 UTC m=+1060.670498536" watchObservedRunningTime="2026-01-24 00:21:16.641139204 +0000 UTC m=+1060.671110205" Jan 24 00:21:17 crc kubenswrapper[4676]: I0124 00:21:17.570667 4676 generic.go:334] "Generic (PLEG): container finished" podID="f5533b3f-acbd-48c8-ad6f-96ba8ea7a435" containerID="f59e9569d48da0357305a5589f2cf90a01ccd73d4a805dd25787bb7bb6ebfba6" exitCode=0 Jan 24 00:21:17 crc kubenswrapper[4676]: I0124 00:21:17.570775 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-jsd4n-config-lcgr8" event={"ID":"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435","Type":"ContainerDied","Data":"f59e9569d48da0357305a5589f2cf90a01ccd73d4a805dd25787bb7bb6ebfba6"} Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.110592 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-drjg7" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.180976 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20421570-a66e-4903-937f-af2ab847b5f1-operator-scripts\") pod \"20421570-a66e-4903-937f-af2ab847b5f1\" (UID: \"20421570-a66e-4903-937f-af2ab847b5f1\") " Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.181405 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7wkn\" (UniqueName: \"kubernetes.io/projected/20421570-a66e-4903-937f-af2ab847b5f1-kube-api-access-b7wkn\") pod \"20421570-a66e-4903-937f-af2ab847b5f1\" (UID: \"20421570-a66e-4903-937f-af2ab847b5f1\") " Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.181750 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20421570-a66e-4903-937f-af2ab847b5f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20421570-a66e-4903-937f-af2ab847b5f1" (UID: "20421570-a66e-4903-937f-af2ab847b5f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.188538 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20421570-a66e-4903-937f-af2ab847b5f1-kube-api-access-b7wkn" (OuterVolumeSpecName: "kube-api-access-b7wkn") pod "20421570-a66e-4903-937f-af2ab847b5f1" (UID: "20421570-a66e-4903-937f-af2ab847b5f1"). InnerVolumeSpecName "kube-api-access-b7wkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.283537 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20421570-a66e-4903-937f-af2ab847b5f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.283573 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7wkn\" (UniqueName: \"kubernetes.io/projected/20421570-a66e-4903-937f-af2ab847b5f1-kube-api-access-b7wkn\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.579938 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-drjg7" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.579933 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-drjg7" event={"ID":"20421570-a66e-4903-937f-af2ab847b5f1","Type":"ContainerDied","Data":"6fe7e9ee134a3384cdb14a9c06f284d65f4ca15260ccd9bddf8bd062136f714f"} Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.580066 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fe7e9ee134a3384cdb14a9c06f284d65f4ca15260ccd9bddf8bd062136f714f" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.582225 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"52b0c7a564ec2421ae93c6f06e2357b89add4953560b35b93460705f5e691400"} Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.582260 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"6792f08274ea491e2f47a7a92b9b8f41450cfcd3cd3fd728fc94129facdea2ae"} Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 
00:21:18.582275 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"f8580f6d6428314292177ea4a66fb98598d83fc57cbb9a1593ab0fae2d6f4a0f"} Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.857467 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.889657 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-log-ovn\") pod \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.889709 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8s74\" (UniqueName: \"kubernetes.io/projected/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-kube-api-access-w8s74\") pod \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.889801 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-run\") pod \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.889881 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-additional-scripts\") pod \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.889980 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-scripts\") pod \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.890007 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-run-ovn\") pod \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\" (UID: \"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435\") " Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.890361 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f5533b3f-acbd-48c8-ad6f-96ba8ea7a435" (UID: "f5533b3f-acbd-48c8-ad6f-96ba8ea7a435"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.890409 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f5533b3f-acbd-48c8-ad6f-96ba8ea7a435" (UID: "f5533b3f-acbd-48c8-ad6f-96ba8ea7a435"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.891324 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-run" (OuterVolumeSpecName: "var-run") pod "f5533b3f-acbd-48c8-ad6f-96ba8ea7a435" (UID: "f5533b3f-acbd-48c8-ad6f-96ba8ea7a435"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.891859 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f5533b3f-acbd-48c8-ad6f-96ba8ea7a435" (UID: "f5533b3f-acbd-48c8-ad6f-96ba8ea7a435"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.892279 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-scripts" (OuterVolumeSpecName: "scripts") pod "f5533b3f-acbd-48c8-ad6f-96ba8ea7a435" (UID: "f5533b3f-acbd-48c8-ad6f-96ba8ea7a435"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.893590 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-kube-api-access-w8s74" (OuterVolumeSpecName: "kube-api-access-w8s74") pod "f5533b3f-acbd-48c8-ad6f-96ba8ea7a435" (UID: "f5533b3f-acbd-48c8-ad6f-96ba8ea7a435"). InnerVolumeSpecName "kube-api-access-w8s74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.991671 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.991704 4676 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.991717 4676 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.991730 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8s74\" (UniqueName: \"kubernetes.io/projected/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-kube-api-access-w8s74\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.991746 4676 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-var-run\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:18 crc kubenswrapper[4676]: I0124 00:21:18.991759 4676 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:19 crc kubenswrapper[4676]: I0124 00:21:19.591964 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jsd4n-config-lcgr8" Jan 24 00:21:19 crc kubenswrapper[4676]: I0124 00:21:19.591959 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jsd4n-config-lcgr8" event={"ID":"f5533b3f-acbd-48c8-ad6f-96ba8ea7a435","Type":"ContainerDied","Data":"6ab4ddbfca70e6160be9f44ee5e62ac3fe6c06d281ba30970058467edbdbafdb"} Jan 24 00:21:19 crc kubenswrapper[4676]: I0124 00:21:19.592298 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ab4ddbfca70e6160be9f44ee5e62ac3fe6c06d281ba30970058467edbdbafdb" Jan 24 00:21:19 crc kubenswrapper[4676]: I0124 00:21:19.595540 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"1edba0950f06bc40eef9983d691b1b32ddfeb63659c38025dfd483b9df5ccb22"} Jan 24 00:21:20 crc kubenswrapper[4676]: I0124 00:21:20.099283 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-jsd4n-config-lcgr8"] Jan 24 00:21:20 crc kubenswrapper[4676]: I0124 00:21:20.112473 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-jsd4n-config-lcgr8"] Jan 24 00:21:20 crc kubenswrapper[4676]: I0124 00:21:20.263605 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5533b3f-acbd-48c8-ad6f-96ba8ea7a435" path="/var/lib/kubelet/pods/f5533b3f-acbd-48c8-ad6f-96ba8ea7a435/volumes" Jan 24 00:21:20 crc kubenswrapper[4676]: I0124 00:21:20.371989 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-jsd4n" Jan 24 00:21:21 crc kubenswrapper[4676]: I0124 00:21:21.609805 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"3aecc2f3ca1fbd1049f9006533fdf5d01f6bf942c5ee181d9c4658231ef5e7fc"} Jan 24 00:21:21 crc 
kubenswrapper[4676]: I0124 00:21:21.610078 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"150a2912253199f68a039847a72a77917d84e33ab2aefe72f6d106206c8e35f2"} Jan 24 00:21:22 crc kubenswrapper[4676]: I0124 00:21:22.623077 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"c9deb54ab3dcda13383c05b900b169d3fe544744cca71b059f30e3d3261057cb"} Jan 24 00:21:22 crc kubenswrapper[4676]: I0124 00:21:22.623414 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"b3e61ed8108d731569f941607d6be9445778ccdb545b6016a97194e0815a77bc"} Jan 24 00:21:23 crc kubenswrapper[4676]: I0124 00:21:23.659690 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"005807bdc57d3d5ebc8b98f4b159a6dad3b74286736b8b7a003461a08a73aecd"} Jan 24 00:21:23 crc kubenswrapper[4676]: I0124 00:21:23.659934 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"30bc488fc17539b0bc9a0c5425e8464ec0588d5aa449a80877b372000a22b666"} Jan 24 00:21:23 crc kubenswrapper[4676]: I0124 00:21:23.659944 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"696beba936985a72ca3611ac139646e8cca505fa8af18342f82ab73e4e8837e0"} Jan 24 00:21:23 crc kubenswrapper[4676]: I0124 00:21:23.659953 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"a01bdae7755588308544f3dcea427b17fe2699df1d983433d0384a78222ddb5f"} Jan 24 00:21:24 crc kubenswrapper[4676]: I0124 00:21:24.676796 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"79b8e0c78855d2388c4c173e7757e18b5c4325314488e65f7aa9ec8b93c76962"} Jan 24 00:21:24 crc kubenswrapper[4676]: I0124 00:21:24.677210 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"a18bffdbd875d6689e14104137855c1f19cd5172e09b69e38f77e3612a7ce891"} Jan 24 00:21:24 crc kubenswrapper[4676]: I0124 00:21:24.677228 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4620e725-9218-461b-a56d-104bcb7f1df4","Type":"ContainerStarted","Data":"4687615383f49d79f3e02c88f15e42314132212d09bb3a522db9cccd1d403781"} Jan 24 00:21:24 crc kubenswrapper[4676]: I0124 00:21:24.713780 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.074208936 podStartE2EDuration="44.713765816s" podCreationTimestamp="2026-01-24 00:20:40 +0000 UTC" firstStartedPulling="2026-01-24 00:21:16.34029028 +0000 UTC m=+1060.370261281" lastFinishedPulling="2026-01-24 00:21:22.97984716 +0000 UTC m=+1067.009818161" observedRunningTime="2026-01-24 00:21:24.710747163 +0000 UTC m=+1068.740718164" watchObservedRunningTime="2026-01-24 00:21:24.713765816 +0000 UTC m=+1068.743736817" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.043744 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-kvxlz"] Jan 24 00:21:25 crc kubenswrapper[4676]: E0124 00:21:25.044236 4676 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f5533b3f-acbd-48c8-ad6f-96ba8ea7a435" containerName="ovn-config" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.044251 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5533b3f-acbd-48c8-ad6f-96ba8ea7a435" containerName="ovn-config" Jan 24 00:21:25 crc kubenswrapper[4676]: E0124 00:21:25.044282 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20421570-a66e-4903-937f-af2ab847b5f1" containerName="mariadb-account-create-update" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.044288 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="20421570-a66e-4903-937f-af2ab847b5f1" containerName="mariadb-account-create-update" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.044448 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="20421570-a66e-4903-937f-af2ab847b5f1" containerName="mariadb-account-create-update" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.044466 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5533b3f-acbd-48c8-ad6f-96ba8ea7a435" containerName="ovn-config" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.045181 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.047263 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.067122 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-kvxlz"] Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.138473 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.138523 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.138543 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxwv2\" (UniqueName: \"kubernetes.io/projected/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-kube-api-access-hxwv2\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.138577 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-config\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.138604 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.138650 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.239940 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.239986 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxwv2\" (UniqueName: \"kubernetes.io/projected/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-kube-api-access-hxwv2\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.240024 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-config\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.240072 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.240857 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.241020 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-config\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.241041 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.240106 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: 
I0124 00:21:25.241177 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.241184 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.242173 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.257763 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxwv2\" (UniqueName: \"kubernetes.io/projected/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-kube-api-access-hxwv2\") pod \"dnsmasq-dns-5c79d794d7-kvxlz\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.360484 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.803408 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="68d6466c-a6ff-40ba-952d-007b14efdfd3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 24 00:21:25 crc kubenswrapper[4676]: I0124 00:21:25.842790 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-kvxlz"] Jan 24 00:21:26 crc kubenswrapper[4676]: I0124 00:21:26.129510 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:21:26 crc kubenswrapper[4676]: I0124 00:21:26.690938 4676 generic.go:334] "Generic (PLEG): container finished" podID="7ebdc261-7c0d-49a2-9233-868fda906788" containerID="1c08827f5b622dc0d2e57d1709fbff0d00bb4f69592994ac4ce101a373fb04cb" exitCode=0 Jan 24 00:21:26 crc kubenswrapper[4676]: I0124 00:21:26.691005 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kbtsc" event={"ID":"7ebdc261-7c0d-49a2-9233-868fda906788","Type":"ContainerDied","Data":"1c08827f5b622dc0d2e57d1709fbff0d00bb4f69592994ac4ce101a373fb04cb"} Jan 24 00:21:26 crc kubenswrapper[4676]: I0124 00:21:26.692624 4676 generic.go:334] "Generic (PLEG): container finished" podID="916cbdfc-24b9-47e7-80a6-bb5bb409ae51" containerID="4146a6e7b1d24d3fef677ddf302a756b49304320ffdd75256ecd3013d9af408e" exitCode=0 Jan 24 00:21:26 crc kubenswrapper[4676]: I0124 00:21:26.692658 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" event={"ID":"916cbdfc-24b9-47e7-80a6-bb5bb409ae51","Type":"ContainerDied","Data":"4146a6e7b1d24d3fef677ddf302a756b49304320ffdd75256ecd3013d9af408e"} Jan 24 00:21:26 crc kubenswrapper[4676]: I0124 00:21:26.692675 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" event={"ID":"916cbdfc-24b9-47e7-80a6-bb5bb409ae51","Type":"ContainerStarted","Data":"01e3b1827353ad5bf4f84620543e191fa4286a5efa456ae8149d657a438c89ff"} Jan 24 00:21:27 crc kubenswrapper[4676]: I0124 00:21:27.703564 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" event={"ID":"916cbdfc-24b9-47e7-80a6-bb5bb409ae51","Type":"ContainerStarted","Data":"691040f96d1b5ce6cc15046d00078df9b6ee0a523e5850e6110844a0d56b04d4"} Jan 24 00:21:27 crc kubenswrapper[4676]: I0124 00:21:27.703858 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:27 crc kubenswrapper[4676]: I0124 00:21:27.728909 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" podStartSLOduration=2.728893347 podStartE2EDuration="2.728893347s" podCreationTimestamp="2026-01-24 00:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:21:27.727492164 +0000 UTC m=+1071.757463165" watchObservedRunningTime="2026-01-24 00:21:27.728893347 +0000 UTC m=+1071.758864348" Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.101147 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kbtsc" Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.192218 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-combined-ca-bundle\") pod \"7ebdc261-7c0d-49a2-9233-868fda906788\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.192275 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-config-data\") pod \"7ebdc261-7c0d-49a2-9233-868fda906788\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.192317 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-db-sync-config-data\") pod \"7ebdc261-7c0d-49a2-9233-868fda906788\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.192368 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6qmf\" (UniqueName: \"kubernetes.io/projected/7ebdc261-7c0d-49a2-9233-868fda906788-kube-api-access-t6qmf\") pod \"7ebdc261-7c0d-49a2-9233-868fda906788\" (UID: \"7ebdc261-7c0d-49a2-9233-868fda906788\") " Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.197632 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ebdc261-7c0d-49a2-9233-868fda906788-kube-api-access-t6qmf" (OuterVolumeSpecName: "kube-api-access-t6qmf") pod "7ebdc261-7c0d-49a2-9233-868fda906788" (UID: "7ebdc261-7c0d-49a2-9233-868fda906788"). InnerVolumeSpecName "kube-api-access-t6qmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.205509 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7ebdc261-7c0d-49a2-9233-868fda906788" (UID: "7ebdc261-7c0d-49a2-9233-868fda906788"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.216564 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ebdc261-7c0d-49a2-9233-868fda906788" (UID: "7ebdc261-7c0d-49a2-9233-868fda906788"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.233941 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-config-data" (OuterVolumeSpecName: "config-data") pod "7ebdc261-7c0d-49a2-9233-868fda906788" (UID: "7ebdc261-7c0d-49a2-9233-868fda906788"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.295196 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6qmf\" (UniqueName: \"kubernetes.io/projected/7ebdc261-7c0d-49a2-9233-868fda906788-kube-api-access-t6qmf\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.295226 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.295236 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.295246 4676 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ebdc261-7c0d-49a2-9233-868fda906788-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.711467 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kbtsc" event={"ID":"7ebdc261-7c0d-49a2-9233-868fda906788","Type":"ContainerDied","Data":"e532ae412cda92f465aa9e050762bb063a13c20d49e7ab63fe45cd1ee4ddc01b"} Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.711510 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e532ae412cda92f465aa9e050762bb063a13c20d49e7ab63fe45cd1ee4ddc01b" Jan 24 00:21:28 crc kubenswrapper[4676]: I0124 00:21:28.711516 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kbtsc" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.182507 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-kvxlz"] Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.222994 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-p2f7j"] Jan 24 00:21:29 crc kubenswrapper[4676]: E0124 00:21:29.223288 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ebdc261-7c0d-49a2-9233-868fda906788" containerName="glance-db-sync" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.223303 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ebdc261-7c0d-49a2-9233-868fda906788" containerName="glance-db-sync" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.223461 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ebdc261-7c0d-49a2-9233-868fda906788" containerName="glance-db-sync" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.224187 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.248551 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-p2f7j"] Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.309854 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.309896 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.309942 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.310104 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.310273 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68mnw\" (UniqueName: \"kubernetes.io/projected/92c7f906-f96b-4d55-b771-6d6425b32b85-kube-api-access-68mnw\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.310453 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-config\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.412057 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68mnw\" (UniqueName: \"kubernetes.io/projected/92c7f906-f96b-4d55-b771-6d6425b32b85-kube-api-access-68mnw\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.412127 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-config\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.412195 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.412216 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.412245 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.412277 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.413089 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.414061 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-config\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.414644 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.415185 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.415742 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.431532 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68mnw\" (UniqueName: \"kubernetes.io/projected/92c7f906-f96b-4d55-b771-6d6425b32b85-kube-api-access-68mnw\") pod \"dnsmasq-dns-5f59b8f679-p2f7j\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.542184 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:29 crc kubenswrapper[4676]: I0124 00:21:29.718520 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" podUID="916cbdfc-24b9-47e7-80a6-bb5bb409ae51" containerName="dnsmasq-dns" containerID="cri-o://691040f96d1b5ce6cc15046d00078df9b6ee0a523e5850e6110844a0d56b04d4" gracePeriod=10 Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.053956 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-p2f7j"] Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.209657 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.327234 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-ovsdbserver-sb\") pod \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.327579 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxwv2\" (UniqueName: \"kubernetes.io/projected/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-kube-api-access-hxwv2\") pod \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.327636 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-config\") pod \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.327681 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-dns-svc\") pod \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.327727 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-dns-swift-storage-0\") pod \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.327762 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-ovsdbserver-nb\") pod \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\" (UID: \"916cbdfc-24b9-47e7-80a6-bb5bb409ae51\") " Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.334091 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-kube-api-access-hxwv2" (OuterVolumeSpecName: "kube-api-access-hxwv2") pod "916cbdfc-24b9-47e7-80a6-bb5bb409ae51" (UID: "916cbdfc-24b9-47e7-80a6-bb5bb409ae51"). InnerVolumeSpecName "kube-api-access-hxwv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.391635 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "916cbdfc-24b9-47e7-80a6-bb5bb409ae51" (UID: "916cbdfc-24b9-47e7-80a6-bb5bb409ae51"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.392032 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "916cbdfc-24b9-47e7-80a6-bb5bb409ae51" (UID: "916cbdfc-24b9-47e7-80a6-bb5bb409ae51"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.396561 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "916cbdfc-24b9-47e7-80a6-bb5bb409ae51" (UID: "916cbdfc-24b9-47e7-80a6-bb5bb409ae51"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.414180 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-config" (OuterVolumeSpecName: "config") pod "916cbdfc-24b9-47e7-80a6-bb5bb409ae51" (UID: "916cbdfc-24b9-47e7-80a6-bb5bb409ae51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.417386 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "916cbdfc-24b9-47e7-80a6-bb5bb409ae51" (UID: "916cbdfc-24b9-47e7-80a6-bb5bb409ae51"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.430780 4676 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.430809 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.430821 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.430832 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxwv2\" (UniqueName: \"kubernetes.io/projected/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-kube-api-access-hxwv2\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.430841 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.430850 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/916cbdfc-24b9-47e7-80a6-bb5bb409ae51-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.754741 4676 generic.go:334] "Generic (PLEG): container finished" podID="92c7f906-f96b-4d55-b771-6d6425b32b85" containerID="041801085cb8948d9dc29d8e4969e7d85a03615218568807e333c3bd95e75583" exitCode=0 Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.755630 4676 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" event={"ID":"92c7f906-f96b-4d55-b771-6d6425b32b85","Type":"ContainerDied","Data":"041801085cb8948d9dc29d8e4969e7d85a03615218568807e333c3bd95e75583"} Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.755712 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" event={"ID":"92c7f906-f96b-4d55-b771-6d6425b32b85","Type":"ContainerStarted","Data":"af5442d400f3ec8197d8bc02bc11e29bed515b4e2a1c69855707d8adbf0ab335"} Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.760020 4676 generic.go:334] "Generic (PLEG): container finished" podID="916cbdfc-24b9-47e7-80a6-bb5bb409ae51" containerID="691040f96d1b5ce6cc15046d00078df9b6ee0a523e5850e6110844a0d56b04d4" exitCode=0 Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.760055 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" event={"ID":"916cbdfc-24b9-47e7-80a6-bb5bb409ae51","Type":"ContainerDied","Data":"691040f96d1b5ce6cc15046d00078df9b6ee0a523e5850e6110844a0d56b04d4"} Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.760082 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" event={"ID":"916cbdfc-24b9-47e7-80a6-bb5bb409ae51","Type":"ContainerDied","Data":"01e3b1827353ad5bf4f84620543e191fa4286a5efa456ae8149d657a438c89ff"} Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.760103 4676 scope.go:117] "RemoveContainer" containerID="691040f96d1b5ce6cc15046d00078df9b6ee0a523e5850e6110844a0d56b04d4" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.760210 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-kvxlz" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.911777 4676 scope.go:117] "RemoveContainer" containerID="4146a6e7b1d24d3fef677ddf302a756b49304320ffdd75256ecd3013d9af408e" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.931687 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-kvxlz"] Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.937631 4676 scope.go:117] "RemoveContainer" containerID="691040f96d1b5ce6cc15046d00078df9b6ee0a523e5850e6110844a0d56b04d4" Jan 24 00:21:30 crc kubenswrapper[4676]: E0124 00:21:30.938154 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"691040f96d1b5ce6cc15046d00078df9b6ee0a523e5850e6110844a0d56b04d4\": container with ID starting with 691040f96d1b5ce6cc15046d00078df9b6ee0a523e5850e6110844a0d56b04d4 not found: ID does not exist" containerID="691040f96d1b5ce6cc15046d00078df9b6ee0a523e5850e6110844a0d56b04d4" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.938183 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"691040f96d1b5ce6cc15046d00078df9b6ee0a523e5850e6110844a0d56b04d4"} err="failed to get container status \"691040f96d1b5ce6cc15046d00078df9b6ee0a523e5850e6110844a0d56b04d4\": rpc error: code = NotFound desc = could not find container \"691040f96d1b5ce6cc15046d00078df9b6ee0a523e5850e6110844a0d56b04d4\": container with ID starting with 691040f96d1b5ce6cc15046d00078df9b6ee0a523e5850e6110844a0d56b04d4 not found: ID does not exist" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.938204 4676 scope.go:117] "RemoveContainer" containerID="4146a6e7b1d24d3fef677ddf302a756b49304320ffdd75256ecd3013d9af408e" Jan 24 00:21:30 crc kubenswrapper[4676]: E0124 00:21:30.938566 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"4146a6e7b1d24d3fef677ddf302a756b49304320ffdd75256ecd3013d9af408e\": container with ID starting with 4146a6e7b1d24d3fef677ddf302a756b49304320ffdd75256ecd3013d9af408e not found: ID does not exist" containerID="4146a6e7b1d24d3fef677ddf302a756b49304320ffdd75256ecd3013d9af408e" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.938607 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4146a6e7b1d24d3fef677ddf302a756b49304320ffdd75256ecd3013d9af408e"} err="failed to get container status \"4146a6e7b1d24d3fef677ddf302a756b49304320ffdd75256ecd3013d9af408e\": rpc error: code = NotFound desc = could not find container \"4146a6e7b1d24d3fef677ddf302a756b49304320ffdd75256ecd3013d9af408e\": container with ID starting with 4146a6e7b1d24d3fef677ddf302a756b49304320ffdd75256ecd3013d9af408e not found: ID does not exist" Jan 24 00:21:30 crc kubenswrapper[4676]: I0124 00:21:30.940313 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-kvxlz"] Jan 24 00:21:31 crc kubenswrapper[4676]: I0124 00:21:31.777061 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" event={"ID":"92c7f906-f96b-4d55-b771-6d6425b32b85","Type":"ContainerStarted","Data":"eecf36d183fa31e03de5954a2aec194cfc8588694d23fe41c8598d42bf4437a3"} Jan 24 00:21:31 crc kubenswrapper[4676]: I0124 00:21:31.777667 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:31 crc kubenswrapper[4676]: I0124 00:21:31.811796 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" podStartSLOduration=2.811772815 podStartE2EDuration="2.811772815s" podCreationTimestamp="2026-01-24 00:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:21:31.80867756 
+0000 UTC m=+1075.838648571" watchObservedRunningTime="2026-01-24 00:21:31.811772815 +0000 UTC m=+1075.841743856" Jan 24 00:21:32 crc kubenswrapper[4676]: I0124 00:21:32.272278 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="916cbdfc-24b9-47e7-80a6-bb5bb409ae51" path="/var/lib/kubelet/pods/916cbdfc-24b9-47e7-80a6-bb5bb409ae51/volumes" Jan 24 00:21:35 crc kubenswrapper[4676]: I0124 00:21:35.806579 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.121203 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-bcttg"] Jan 24 00:21:36 crc kubenswrapper[4676]: E0124 00:21:36.121738 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916cbdfc-24b9-47e7-80a6-bb5bb409ae51" containerName="dnsmasq-dns" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.121753 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="916cbdfc-24b9-47e7-80a6-bb5bb409ae51" containerName="dnsmasq-dns" Jan 24 00:21:36 crc kubenswrapper[4676]: E0124 00:21:36.121783 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916cbdfc-24b9-47e7-80a6-bb5bb409ae51" containerName="init" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.121790 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="916cbdfc-24b9-47e7-80a6-bb5bb409ae51" containerName="init" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.121930 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="916cbdfc-24b9-47e7-80a6-bb5bb409ae51" containerName="dnsmasq-dns" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.122389 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-bcttg" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.136329 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bcttg"] Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.240322 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-b4lc5"] Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.241407 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-b4lc5" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.252946 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d058-account-create-update-pjzm7"] Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.253951 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d058-account-create-update-pjzm7" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.256073 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.283207 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d058-account-create-update-pjzm7"] Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.283232 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-b4lc5"] Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.288536 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b488c3f8-bc04-4c8d-98b7-72145ae9e948-operator-scripts\") pod \"cinder-db-create-bcttg\" (UID: \"b488c3f8-bc04-4c8d-98b7-72145ae9e948\") " pod="openstack/cinder-db-create-bcttg" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.288667 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4mnv8\" (UniqueName: \"kubernetes.io/projected/b488c3f8-bc04-4c8d-98b7-72145ae9e948-kube-api-access-4mnv8\") pod \"cinder-db-create-bcttg\" (UID: \"b488c3f8-bc04-4c8d-98b7-72145ae9e948\") " pod="openstack/cinder-db-create-bcttg" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.390365 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mnv8\" (UniqueName: \"kubernetes.io/projected/b488c3f8-bc04-4c8d-98b7-72145ae9e948-kube-api-access-4mnv8\") pod \"cinder-db-create-bcttg\" (UID: \"b488c3f8-bc04-4c8d-98b7-72145ae9e948\") " pod="openstack/cinder-db-create-bcttg" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.390432 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwnv8\" (UniqueName: \"kubernetes.io/projected/64a07672-60fb-4935-83e2-99f39e15427f-kube-api-access-qwnv8\") pod \"cinder-d058-account-create-update-pjzm7\" (UID: \"64a07672-60fb-4935-83e2-99f39e15427f\") " pod="openstack/cinder-d058-account-create-update-pjzm7" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.390459 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64efcf33-9ffe-402d-b0b0-9cf53c7a5495-operator-scripts\") pod \"barbican-db-create-b4lc5\" (UID: \"64efcf33-9ffe-402d-b0b0-9cf53c7a5495\") " pod="openstack/barbican-db-create-b4lc5" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.390487 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmv6c\" (UniqueName: \"kubernetes.io/projected/64efcf33-9ffe-402d-b0b0-9cf53c7a5495-kube-api-access-vmv6c\") pod \"barbican-db-create-b4lc5\" (UID: \"64efcf33-9ffe-402d-b0b0-9cf53c7a5495\") " pod="openstack/barbican-db-create-b4lc5" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.390550 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b488c3f8-bc04-4c8d-98b7-72145ae9e948-operator-scripts\") pod \"cinder-db-create-bcttg\" (UID: \"b488c3f8-bc04-4c8d-98b7-72145ae9e948\") " pod="openstack/cinder-db-create-bcttg" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.390566 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64a07672-60fb-4935-83e2-99f39e15427f-operator-scripts\") pod \"cinder-d058-account-create-update-pjzm7\" (UID: \"64a07672-60fb-4935-83e2-99f39e15427f\") " pod="openstack/cinder-d058-account-create-update-pjzm7" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.391790 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b488c3f8-bc04-4c8d-98b7-72145ae9e948-operator-scripts\") pod \"cinder-db-create-bcttg\" (UID: \"b488c3f8-bc04-4c8d-98b7-72145ae9e948\") " pod="openstack/cinder-db-create-bcttg" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.417716 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mnv8\" (UniqueName: \"kubernetes.io/projected/b488c3f8-bc04-4c8d-98b7-72145ae9e948-kube-api-access-4mnv8\") pod \"cinder-db-create-bcttg\" (UID: \"b488c3f8-bc04-4c8d-98b7-72145ae9e948\") " pod="openstack/cinder-db-create-bcttg" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.440308 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mtdlz"] Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.441489 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mtdlz" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.444476 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-bcttg" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.471658 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mtdlz"] Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.494166 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwnv8\" (UniqueName: \"kubernetes.io/projected/64a07672-60fb-4935-83e2-99f39e15427f-kube-api-access-qwnv8\") pod \"cinder-d058-account-create-update-pjzm7\" (UID: \"64a07672-60fb-4935-83e2-99f39e15427f\") " pod="openstack/cinder-d058-account-create-update-pjzm7" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.494228 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64efcf33-9ffe-402d-b0b0-9cf53c7a5495-operator-scripts\") pod \"barbican-db-create-b4lc5\" (UID: \"64efcf33-9ffe-402d-b0b0-9cf53c7a5495\") " pod="openstack/barbican-db-create-b4lc5" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.494271 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmv6c\" (UniqueName: \"kubernetes.io/projected/64efcf33-9ffe-402d-b0b0-9cf53c7a5495-kube-api-access-vmv6c\") pod \"barbican-db-create-b4lc5\" (UID: \"64efcf33-9ffe-402d-b0b0-9cf53c7a5495\") " pod="openstack/barbican-db-create-b4lc5" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.494329 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64a07672-60fb-4935-83e2-99f39e15427f-operator-scripts\") pod \"cinder-d058-account-create-update-pjzm7\" (UID: \"64a07672-60fb-4935-83e2-99f39e15427f\") " pod="openstack/cinder-d058-account-create-update-pjzm7" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.495178 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/64a07672-60fb-4935-83e2-99f39e15427f-operator-scripts\") pod \"cinder-d058-account-create-update-pjzm7\" (UID: \"64a07672-60fb-4935-83e2-99f39e15427f\") " pod="openstack/cinder-d058-account-create-update-pjzm7" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.495944 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64efcf33-9ffe-402d-b0b0-9cf53c7a5495-operator-scripts\") pod \"barbican-db-create-b4lc5\" (UID: \"64efcf33-9ffe-402d-b0b0-9cf53c7a5495\") " pod="openstack/barbican-db-create-b4lc5" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.521222 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwnv8\" (UniqueName: \"kubernetes.io/projected/64a07672-60fb-4935-83e2-99f39e15427f-kube-api-access-qwnv8\") pod \"cinder-d058-account-create-update-pjzm7\" (UID: \"64a07672-60fb-4935-83e2-99f39e15427f\") " pod="openstack/cinder-d058-account-create-update-pjzm7" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.539904 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4766-account-create-update-bbg95"] Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.540815 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4766-account-create-update-bbg95" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.547102 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.549004 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmv6c\" (UniqueName: \"kubernetes.io/projected/64efcf33-9ffe-402d-b0b0-9cf53c7a5495-kube-api-access-vmv6c\") pod \"barbican-db-create-b4lc5\" (UID: \"64efcf33-9ffe-402d-b0b0-9cf53c7a5495\") " pod="openstack/barbican-db-create-b4lc5" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.570267 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-b4lc5" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.572260 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4766-account-create-update-bbg95"] Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.591008 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d058-account-create-update-pjzm7" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.597459 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d4fb3ef-0cde-499c-8018-14cc96b495f5-operator-scripts\") pod \"neutron-db-create-mtdlz\" (UID: \"2d4fb3ef-0cde-499c-8018-14cc96b495f5\") " pod="openstack/neutron-db-create-mtdlz" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.597515 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw24b\" (UniqueName: \"kubernetes.io/projected/2d4fb3ef-0cde-499c-8018-14cc96b495f5-kube-api-access-zw24b\") pod \"neutron-db-create-mtdlz\" (UID: \"2d4fb3ef-0cde-499c-8018-14cc96b495f5\") " pod="openstack/neutron-db-create-mtdlz" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.679307 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-wg6tc"] Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.684972 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wg6tc"] Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.685083 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wg6tc" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.688737 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fcgg9" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.688915 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.691766 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.695072 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.701418 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a61f15c0-2fa1-4d69-bb80-471d6e4e7e09-operator-scripts\") pod \"barbican-4766-account-create-update-bbg95\" (UID: \"a61f15c0-2fa1-4d69-bb80-471d6e4e7e09\") " pod="openstack/barbican-4766-account-create-update-bbg95" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.701460 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d4fb3ef-0cde-499c-8018-14cc96b495f5-operator-scripts\") pod \"neutron-db-create-mtdlz\" (UID: \"2d4fb3ef-0cde-499c-8018-14cc96b495f5\") " pod="openstack/neutron-db-create-mtdlz" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.701500 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw24b\" (UniqueName: \"kubernetes.io/projected/2d4fb3ef-0cde-499c-8018-14cc96b495f5-kube-api-access-zw24b\") pod \"neutron-db-create-mtdlz\" (UID: \"2d4fb3ef-0cde-499c-8018-14cc96b495f5\") " pod="openstack/neutron-db-create-mtdlz" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.701550 
4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stcpz\" (UniqueName: \"kubernetes.io/projected/a61f15c0-2fa1-4d69-bb80-471d6e4e7e09-kube-api-access-stcpz\") pod \"barbican-4766-account-create-update-bbg95\" (UID: \"a61f15c0-2fa1-4d69-bb80-471d6e4e7e09\") " pod="openstack/barbican-4766-account-create-update-bbg95" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.710684 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d4fb3ef-0cde-499c-8018-14cc96b495f5-operator-scripts\") pod \"neutron-db-create-mtdlz\" (UID: \"2d4fb3ef-0cde-499c-8018-14cc96b495f5\") " pod="openstack/neutron-db-create-mtdlz" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.726834 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw24b\" (UniqueName: \"kubernetes.io/projected/2d4fb3ef-0cde-499c-8018-14cc96b495f5-kube-api-access-zw24b\") pod \"neutron-db-create-mtdlz\" (UID: \"2d4fb3ef-0cde-499c-8018-14cc96b495f5\") " pod="openstack/neutron-db-create-mtdlz" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.745975 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ba07-account-create-update-hw2sx"] Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.746908 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ba07-account-create-update-hw2sx" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.761424 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.771206 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ba07-account-create-update-hw2sx"] Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.805556 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhdgs\" (UniqueName: \"kubernetes.io/projected/30027bc7-2637-4abb-9568-1190cf80a3a5-kube-api-access-bhdgs\") pod \"keystone-db-sync-wg6tc\" (UID: \"30027bc7-2637-4abb-9568-1190cf80a3a5\") " pod="openstack/keystone-db-sync-wg6tc" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.805632 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stcpz\" (UniqueName: \"kubernetes.io/projected/a61f15c0-2fa1-4d69-bb80-471d6e4e7e09-kube-api-access-stcpz\") pod \"barbican-4766-account-create-update-bbg95\" (UID: \"a61f15c0-2fa1-4d69-bb80-471d6e4e7e09\") " pod="openstack/barbican-4766-account-create-update-bbg95" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.805686 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ace7871-3944-4c72-980f-c9d5e7d65c71-operator-scripts\") pod \"neutron-ba07-account-create-update-hw2sx\" (UID: \"9ace7871-3944-4c72-980f-c9d5e7d65c71\") " pod="openstack/neutron-ba07-account-create-update-hw2sx" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.805733 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30027bc7-2637-4abb-9568-1190cf80a3a5-config-data\") pod \"keystone-db-sync-wg6tc\" (UID: 
\"30027bc7-2637-4abb-9568-1190cf80a3a5\") " pod="openstack/keystone-db-sync-wg6tc" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.805759 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30027bc7-2637-4abb-9568-1190cf80a3a5-combined-ca-bundle\") pod \"keystone-db-sync-wg6tc\" (UID: \"30027bc7-2637-4abb-9568-1190cf80a3a5\") " pod="openstack/keystone-db-sync-wg6tc" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.805779 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb6ck\" (UniqueName: \"kubernetes.io/projected/9ace7871-3944-4c72-980f-c9d5e7d65c71-kube-api-access-qb6ck\") pod \"neutron-ba07-account-create-update-hw2sx\" (UID: \"9ace7871-3944-4c72-980f-c9d5e7d65c71\") " pod="openstack/neutron-ba07-account-create-update-hw2sx" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.805808 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a61f15c0-2fa1-4d69-bb80-471d6e4e7e09-operator-scripts\") pod \"barbican-4766-account-create-update-bbg95\" (UID: \"a61f15c0-2fa1-4d69-bb80-471d6e4e7e09\") " pod="openstack/barbican-4766-account-create-update-bbg95" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.806470 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a61f15c0-2fa1-4d69-bb80-471d6e4e7e09-operator-scripts\") pod \"barbican-4766-account-create-update-bbg95\" (UID: \"a61f15c0-2fa1-4d69-bb80-471d6e4e7e09\") " pod="openstack/barbican-4766-account-create-update-bbg95" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.836891 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stcpz\" (UniqueName: 
\"kubernetes.io/projected/a61f15c0-2fa1-4d69-bb80-471d6e4e7e09-kube-api-access-stcpz\") pod \"barbican-4766-account-create-update-bbg95\" (UID: \"a61f15c0-2fa1-4d69-bb80-471d6e4e7e09\") " pod="openstack/barbican-4766-account-create-update-bbg95" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.901177 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mtdlz" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.908024 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30027bc7-2637-4abb-9568-1190cf80a3a5-config-data\") pod \"keystone-db-sync-wg6tc\" (UID: \"30027bc7-2637-4abb-9568-1190cf80a3a5\") " pod="openstack/keystone-db-sync-wg6tc" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.908070 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30027bc7-2637-4abb-9568-1190cf80a3a5-combined-ca-bundle\") pod \"keystone-db-sync-wg6tc\" (UID: \"30027bc7-2637-4abb-9568-1190cf80a3a5\") " pod="openstack/keystone-db-sync-wg6tc" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.908113 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb6ck\" (UniqueName: \"kubernetes.io/projected/9ace7871-3944-4c72-980f-c9d5e7d65c71-kube-api-access-qb6ck\") pod \"neutron-ba07-account-create-update-hw2sx\" (UID: \"9ace7871-3944-4c72-980f-c9d5e7d65c71\") " pod="openstack/neutron-ba07-account-create-update-hw2sx" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.908162 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhdgs\" (UniqueName: \"kubernetes.io/projected/30027bc7-2637-4abb-9568-1190cf80a3a5-kube-api-access-bhdgs\") pod \"keystone-db-sync-wg6tc\" (UID: \"30027bc7-2637-4abb-9568-1190cf80a3a5\") " pod="openstack/keystone-db-sync-wg6tc" Jan 24 
00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.908214 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ace7871-3944-4c72-980f-c9d5e7d65c71-operator-scripts\") pod \"neutron-ba07-account-create-update-hw2sx\" (UID: \"9ace7871-3944-4c72-980f-c9d5e7d65c71\") " pod="openstack/neutron-ba07-account-create-update-hw2sx" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.908811 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ace7871-3944-4c72-980f-c9d5e7d65c71-operator-scripts\") pod \"neutron-ba07-account-create-update-hw2sx\" (UID: \"9ace7871-3944-4c72-980f-c9d5e7d65c71\") " pod="openstack/neutron-ba07-account-create-update-hw2sx" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.913280 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30027bc7-2637-4abb-9568-1190cf80a3a5-combined-ca-bundle\") pod \"keystone-db-sync-wg6tc\" (UID: \"30027bc7-2637-4abb-9568-1190cf80a3a5\") " pod="openstack/keystone-db-sync-wg6tc" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.913306 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30027bc7-2637-4abb-9568-1190cf80a3a5-config-data\") pod \"keystone-db-sync-wg6tc\" (UID: \"30027bc7-2637-4abb-9568-1190cf80a3a5\") " pod="openstack/keystone-db-sync-wg6tc" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.941322 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhdgs\" (UniqueName: \"kubernetes.io/projected/30027bc7-2637-4abb-9568-1190cf80a3a5-kube-api-access-bhdgs\") pod \"keystone-db-sync-wg6tc\" (UID: \"30027bc7-2637-4abb-9568-1190cf80a3a5\") " pod="openstack/keystone-db-sync-wg6tc" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.952737 
4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb6ck\" (UniqueName: \"kubernetes.io/projected/9ace7871-3944-4c72-980f-c9d5e7d65c71-kube-api-access-qb6ck\") pod \"neutron-ba07-account-create-update-hw2sx\" (UID: \"9ace7871-3944-4c72-980f-c9d5e7d65c71\") " pod="openstack/neutron-ba07-account-create-update-hw2sx" Jan 24 00:21:36 crc kubenswrapper[4676]: I0124 00:21:36.996902 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4766-account-create-update-bbg95" Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.015514 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wg6tc" Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.058896 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bcttg"] Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.207790 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ba07-account-create-update-hw2sx" Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.393362 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d058-account-create-update-pjzm7"] Jan 24 00:21:37 crc kubenswrapper[4676]: W0124 00:21:37.406942 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64a07672_60fb_4935_83e2_99f39e15427f.slice/crio-aa778d2b9641cb4a032b28b9537d81b843cc0344050ae97830c3a489d9508916 WatchSource:0}: Error finding container aa778d2b9641cb4a032b28b9537d81b843cc0344050ae97830c3a489d9508916: Status 404 returned error can't find the container with id aa778d2b9641cb4a032b28b9537d81b843cc0344050ae97830c3a489d9508916 Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.420367 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-b4lc5"] Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.506302 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4766-account-create-update-bbg95"] Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.561906 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wg6tc"] Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.580587 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mtdlz"] Jan 24 00:21:37 crc kubenswrapper[4676]: W0124 00:21:37.596580 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30027bc7_2637_4abb_9568_1190cf80a3a5.slice/crio-3b055c8876511446c351694e38bb4288f579441b122767bc434100cd495602c9 WatchSource:0}: Error finding container 3b055c8876511446c351694e38bb4288f579441b122767bc434100cd495602c9: Status 404 returned error can't find the container with id 3b055c8876511446c351694e38bb4288f579441b122767bc434100cd495602c9 Jan 24 
00:21:37 crc kubenswrapper[4676]: W0124 00:21:37.611922 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d4fb3ef_0cde_499c_8018_14cc96b495f5.slice/crio-ca93660ad2310b325562d8e1908e400fca734f442cb3fbc769a5593ef16a799a WatchSource:0}: Error finding container ca93660ad2310b325562d8e1908e400fca734f442cb3fbc769a5593ef16a799a: Status 404 returned error can't find the container with id ca93660ad2310b325562d8e1908e400fca734f442cb3fbc769a5593ef16a799a Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.836063 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wg6tc" event={"ID":"30027bc7-2637-4abb-9568-1190cf80a3a5","Type":"ContainerStarted","Data":"3b055c8876511446c351694e38bb4288f579441b122767bc434100cd495602c9"} Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.838585 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4766-account-create-update-bbg95" event={"ID":"a61f15c0-2fa1-4d69-bb80-471d6e4e7e09","Type":"ContainerStarted","Data":"c0f6b7c08c73c5cd3ba2389c70cde6261bfdc95261dc1f7fcb14c46b9908056d"} Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.838637 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4766-account-create-update-bbg95" event={"ID":"a61f15c0-2fa1-4d69-bb80-471d6e4e7e09","Type":"ContainerStarted","Data":"497443544a8b1e5d2264898469d70d9d8eb5bc684f16d3e8ec4d93ed1fa60146"} Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.841642 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mtdlz" event={"ID":"2d4fb3ef-0cde-499c-8018-14cc96b495f5","Type":"ContainerStarted","Data":"b062d260d4a7b754ef8f1768435c8c0945467373bcd6d26d2a060794a30067d3"} Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.841673 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mtdlz" 
event={"ID":"2d4fb3ef-0cde-499c-8018-14cc96b495f5","Type":"ContainerStarted","Data":"ca93660ad2310b325562d8e1908e400fca734f442cb3fbc769a5593ef16a799a"} Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.846385 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d058-account-create-update-pjzm7" event={"ID":"64a07672-60fb-4935-83e2-99f39e15427f","Type":"ContainerStarted","Data":"acb8fc5743669070f34cea9944d1cdf719e2212dfd00759a3d455ff74f1aa3b7"} Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.846446 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d058-account-create-update-pjzm7" event={"ID":"64a07672-60fb-4935-83e2-99f39e15427f","Type":"ContainerStarted","Data":"aa778d2b9641cb4a032b28b9537d81b843cc0344050ae97830c3a489d9508916"} Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.848132 4676 generic.go:334] "Generic (PLEG): container finished" podID="b488c3f8-bc04-4c8d-98b7-72145ae9e948" containerID="89ec371006cc8981cf4d3c50f26065f195057a546aba2faa19252efcd286b309" exitCode=0 Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.848203 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bcttg" event={"ID":"b488c3f8-bc04-4c8d-98b7-72145ae9e948","Type":"ContainerDied","Data":"89ec371006cc8981cf4d3c50f26065f195057a546aba2faa19252efcd286b309"} Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.848354 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bcttg" event={"ID":"b488c3f8-bc04-4c8d-98b7-72145ae9e948","Type":"ContainerStarted","Data":"96d1f38596ab4c33a9c07bd749a1c1fae3129b0d58a93768d2af5c2ff4e39a89"} Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.850139 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-b4lc5" event={"ID":"64efcf33-9ffe-402d-b0b0-9cf53c7a5495","Type":"ContainerStarted","Data":"bfee0ea90c79ca47b4f4a9473a613bbee236021e6a49db23a4490b8cde0e9a39"} Jan 24 
00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.850222 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-b4lc5" event={"ID":"64efcf33-9ffe-402d-b0b0-9cf53c7a5495","Type":"ContainerStarted","Data":"e89a4d04ab1c50604d49d0624984dbc3a381642b8ef5d82c2bf3da9607b0c956"} Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.854670 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-4766-account-create-update-bbg95" podStartSLOduration=1.8546523019999999 podStartE2EDuration="1.854652302s" podCreationTimestamp="2026-01-24 00:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:21:37.851137965 +0000 UTC m=+1081.881108966" watchObservedRunningTime="2026-01-24 00:21:37.854652302 +0000 UTC m=+1081.884623303" Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.881741 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-mtdlz" podStartSLOduration=1.881722911 podStartE2EDuration="1.881722911s" podCreationTimestamp="2026-01-24 00:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:21:37.876966175 +0000 UTC m=+1081.906937166" watchObservedRunningTime="2026-01-24 00:21:37.881722911 +0000 UTC m=+1081.911693912" Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.902559 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-b4lc5" podStartSLOduration=1.9025422779999999 podStartE2EDuration="1.902542278s" podCreationTimestamp="2026-01-24 00:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:21:37.896237805 +0000 UTC m=+1081.926208806" watchObservedRunningTime="2026-01-24 
00:21:37.902542278 +0000 UTC m=+1081.932513279" Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.919156 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-d058-account-create-update-pjzm7" podStartSLOduration=1.919139946 podStartE2EDuration="1.919139946s" podCreationTimestamp="2026-01-24 00:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:21:37.914192505 +0000 UTC m=+1081.944163506" watchObservedRunningTime="2026-01-24 00:21:37.919139946 +0000 UTC m=+1081.949110947" Jan 24 00:21:37 crc kubenswrapper[4676]: I0124 00:21:37.933391 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ba07-account-create-update-hw2sx"] Jan 24 00:21:38 crc kubenswrapper[4676]: I0124 00:21:38.889976 4676 generic.go:334] "Generic (PLEG): container finished" podID="9ace7871-3944-4c72-980f-c9d5e7d65c71" containerID="c87e9f17f624e372e522a3229d29135c0ab3c4953ce39a4a0c79e4002fe69658" exitCode=0 Jan 24 00:21:38 crc kubenswrapper[4676]: I0124 00:21:38.890335 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ba07-account-create-update-hw2sx" event={"ID":"9ace7871-3944-4c72-980f-c9d5e7d65c71","Type":"ContainerDied","Data":"c87e9f17f624e372e522a3229d29135c0ab3c4953ce39a4a0c79e4002fe69658"} Jan 24 00:21:38 crc kubenswrapper[4676]: I0124 00:21:38.890362 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ba07-account-create-update-hw2sx" event={"ID":"9ace7871-3944-4c72-980f-c9d5e7d65c71","Type":"ContainerStarted","Data":"7d5bd01e9496b9422e6f2c0b4d3d95295f8f687c97766a57ad1bfe5c28074b14"} Jan 24 00:21:38 crc kubenswrapper[4676]: I0124 00:21:38.891997 4676 generic.go:334] "Generic (PLEG): container finished" podID="a61f15c0-2fa1-4d69-bb80-471d6e4e7e09" containerID="c0f6b7c08c73c5cd3ba2389c70cde6261bfdc95261dc1f7fcb14c46b9908056d" exitCode=0 Jan 24 00:21:38 crc 
kubenswrapper[4676]: I0124 00:21:38.892043 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4766-account-create-update-bbg95" event={"ID":"a61f15c0-2fa1-4d69-bb80-471d6e4e7e09","Type":"ContainerDied","Data":"c0f6b7c08c73c5cd3ba2389c70cde6261bfdc95261dc1f7fcb14c46b9908056d"} Jan 24 00:21:38 crc kubenswrapper[4676]: I0124 00:21:38.912678 4676 generic.go:334] "Generic (PLEG): container finished" podID="2d4fb3ef-0cde-499c-8018-14cc96b495f5" containerID="b062d260d4a7b754ef8f1768435c8c0945467373bcd6d26d2a060794a30067d3" exitCode=0 Jan 24 00:21:38 crc kubenswrapper[4676]: I0124 00:21:38.912962 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mtdlz" event={"ID":"2d4fb3ef-0cde-499c-8018-14cc96b495f5","Type":"ContainerDied","Data":"b062d260d4a7b754ef8f1768435c8c0945467373bcd6d26d2a060794a30067d3"} Jan 24 00:21:38 crc kubenswrapper[4676]: I0124 00:21:38.948075 4676 generic.go:334] "Generic (PLEG): container finished" podID="64a07672-60fb-4935-83e2-99f39e15427f" containerID="acb8fc5743669070f34cea9944d1cdf719e2212dfd00759a3d455ff74f1aa3b7" exitCode=0 Jan 24 00:21:38 crc kubenswrapper[4676]: I0124 00:21:38.948177 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d058-account-create-update-pjzm7" event={"ID":"64a07672-60fb-4935-83e2-99f39e15427f","Type":"ContainerDied","Data":"acb8fc5743669070f34cea9944d1cdf719e2212dfd00759a3d455ff74f1aa3b7"} Jan 24 00:21:38 crc kubenswrapper[4676]: I0124 00:21:38.963546 4676 generic.go:334] "Generic (PLEG): container finished" podID="64efcf33-9ffe-402d-b0b0-9cf53c7a5495" containerID="bfee0ea90c79ca47b4f4a9473a613bbee236021e6a49db23a4490b8cde0e9a39" exitCode=0 Jan 24 00:21:38 crc kubenswrapper[4676]: I0124 00:21:38.963900 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-b4lc5" 
event={"ID":"64efcf33-9ffe-402d-b0b0-9cf53c7a5495","Type":"ContainerDied","Data":"bfee0ea90c79ca47b4f4a9473a613bbee236021e6a49db23a4490b8cde0e9a39"} Jan 24 00:21:39 crc kubenswrapper[4676]: I0124 00:21:39.367076 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bcttg" Jan 24 00:21:39 crc kubenswrapper[4676]: I0124 00:21:39.456088 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b488c3f8-bc04-4c8d-98b7-72145ae9e948-operator-scripts\") pod \"b488c3f8-bc04-4c8d-98b7-72145ae9e948\" (UID: \"b488c3f8-bc04-4c8d-98b7-72145ae9e948\") " Jan 24 00:21:39 crc kubenswrapper[4676]: I0124 00:21:39.456802 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b488c3f8-bc04-4c8d-98b7-72145ae9e948-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b488c3f8-bc04-4c8d-98b7-72145ae9e948" (UID: "b488c3f8-bc04-4c8d-98b7-72145ae9e948"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:39 crc kubenswrapper[4676]: I0124 00:21:39.457019 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mnv8\" (UniqueName: \"kubernetes.io/projected/b488c3f8-bc04-4c8d-98b7-72145ae9e948-kube-api-access-4mnv8\") pod \"b488c3f8-bc04-4c8d-98b7-72145ae9e948\" (UID: \"b488c3f8-bc04-4c8d-98b7-72145ae9e948\") " Jan 24 00:21:39 crc kubenswrapper[4676]: I0124 00:21:39.458246 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b488c3f8-bc04-4c8d-98b7-72145ae9e948-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:39 crc kubenswrapper[4676]: I0124 00:21:39.462585 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b488c3f8-bc04-4c8d-98b7-72145ae9e948-kube-api-access-4mnv8" (OuterVolumeSpecName: "kube-api-access-4mnv8") pod "b488c3f8-bc04-4c8d-98b7-72145ae9e948" (UID: "b488c3f8-bc04-4c8d-98b7-72145ae9e948"). InnerVolumeSpecName "kube-api-access-4mnv8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:39 crc kubenswrapper[4676]: I0124 00:21:39.544445 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:21:39 crc kubenswrapper[4676]: I0124 00:21:39.562556 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mnv8\" (UniqueName: \"kubernetes.io/projected/b488c3f8-bc04-4c8d-98b7-72145ae9e948-kube-api-access-4mnv8\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:39 crc kubenswrapper[4676]: I0124 00:21:39.611066 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-88xjq"] Jan 24 00:21:39 crc kubenswrapper[4676]: I0124 00:21:39.611506 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" podUID="1905ca79-a4c4-4286-8d88-2855e7b9ba4c" containerName="dnsmasq-dns" containerID="cri-o://1d6d0279f417026b6ba6bd78b2b2fd59bc265f683401ca01652ddd44d6b071a2" gracePeriod=10 Jan 24 00:21:39 crc kubenswrapper[4676]: I0124 00:21:39.974029 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bcttg" event={"ID":"b488c3f8-bc04-4c8d-98b7-72145ae9e948","Type":"ContainerDied","Data":"96d1f38596ab4c33a9c07bd749a1c1fae3129b0d58a93768d2af5c2ff4e39a89"} Jan 24 00:21:39 crc kubenswrapper[4676]: I0124 00:21:39.974080 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96d1f38596ab4c33a9c07bd749a1c1fae3129b0d58a93768d2af5c2ff4e39a89" Jan 24 00:21:39 crc kubenswrapper[4676]: I0124 00:21:39.974150 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-bcttg" Jan 24 00:21:39 crc kubenswrapper[4676]: I0124 00:21:39.976948 4676 generic.go:334] "Generic (PLEG): container finished" podID="1905ca79-a4c4-4286-8d88-2855e7b9ba4c" containerID="1d6d0279f417026b6ba6bd78b2b2fd59bc265f683401ca01652ddd44d6b071a2" exitCode=0 Jan 24 00:21:39 crc kubenswrapper[4676]: I0124 00:21:39.977210 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" event={"ID":"1905ca79-a4c4-4286-8d88-2855e7b9ba4c","Type":"ContainerDied","Data":"1d6d0279f417026b6ba6bd78b2b2fd59bc265f683401ca01652ddd44d6b071a2"} Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.405022 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4766-account-create-update-bbg95" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.413962 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ba07-account-create-update-hw2sx" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.452476 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ace7871-3944-4c72-980f-c9d5e7d65c71-operator-scripts\") pod \"9ace7871-3944-4c72-980f-c9d5e7d65c71\" (UID: \"9ace7871-3944-4c72-980f-c9d5e7d65c71\") " Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.452552 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb6ck\" (UniqueName: \"kubernetes.io/projected/9ace7871-3944-4c72-980f-c9d5e7d65c71-kube-api-access-qb6ck\") pod \"9ace7871-3944-4c72-980f-c9d5e7d65c71\" (UID: \"9ace7871-3944-4c72-980f-c9d5e7d65c71\") " Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.452595 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stcpz\" (UniqueName: 
\"kubernetes.io/projected/a61f15c0-2fa1-4d69-bb80-471d6e4e7e09-kube-api-access-stcpz\") pod \"a61f15c0-2fa1-4d69-bb80-471d6e4e7e09\" (UID: \"a61f15c0-2fa1-4d69-bb80-471d6e4e7e09\") " Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.452647 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a61f15c0-2fa1-4d69-bb80-471d6e4e7e09-operator-scripts\") pod \"a61f15c0-2fa1-4d69-bb80-471d6e4e7e09\" (UID: \"a61f15c0-2fa1-4d69-bb80-471d6e4e7e09\") " Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.454160 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a61f15c0-2fa1-4d69-bb80-471d6e4e7e09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a61f15c0-2fa1-4d69-bb80-471d6e4e7e09" (UID: "a61f15c0-2fa1-4d69-bb80-471d6e4e7e09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.454863 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ace7871-3944-4c72-980f-c9d5e7d65c71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ace7871-3944-4c72-980f-c9d5e7d65c71" (UID: "9ace7871-3944-4c72-980f-c9d5e7d65c71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.464593 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a61f15c0-2fa1-4d69-bb80-471d6e4e7e09-kube-api-access-stcpz" (OuterVolumeSpecName: "kube-api-access-stcpz") pod "a61f15c0-2fa1-4d69-bb80-471d6e4e7e09" (UID: "a61f15c0-2fa1-4d69-bb80-471d6e4e7e09"). InnerVolumeSpecName "kube-api-access-stcpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.464629 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ace7871-3944-4c72-980f-c9d5e7d65c71-kube-api-access-qb6ck" (OuterVolumeSpecName: "kube-api-access-qb6ck") pod "9ace7871-3944-4c72-980f-c9d5e7d65c71" (UID: "9ace7871-3944-4c72-980f-c9d5e7d65c71"). InnerVolumeSpecName "kube-api-access-qb6ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.469434 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mtdlz" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.474424 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d058-account-create-update-pjzm7" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.540850 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-b4lc5" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.550918 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.556425 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a61f15c0-2fa1-4d69-bb80-471d6e4e7e09-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.556456 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ace7871-3944-4c72-980f-c9d5e7d65c71-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.556469 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb6ck\" (UniqueName: \"kubernetes.io/projected/9ace7871-3944-4c72-980f-c9d5e7d65c71-kube-api-access-qb6ck\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.556481 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stcpz\" (UniqueName: \"kubernetes.io/projected/a61f15c0-2fa1-4d69-bb80-471d6e4e7e09-kube-api-access-stcpz\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.657945 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d4fb3ef-0cde-499c-8018-14cc96b495f5-operator-scripts\") pod \"2d4fb3ef-0cde-499c-8018-14cc96b495f5\" (UID: \"2d4fb3ef-0cde-499c-8018-14cc96b495f5\") " Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.658010 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64efcf33-9ffe-402d-b0b0-9cf53c7a5495-operator-scripts\") pod \"64efcf33-9ffe-402d-b0b0-9cf53c7a5495\" (UID: \"64efcf33-9ffe-402d-b0b0-9cf53c7a5495\") " Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.658042 4676 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lpslv\" (UniqueName: \"kubernetes.io/projected/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-kube-api-access-lpslv\") pod \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.658081 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-dns-svc\") pod \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.658117 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-ovsdbserver-nb\") pod \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.658195 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-config\") pod \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.658253 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmv6c\" (UniqueName: \"kubernetes.io/projected/64efcf33-9ffe-402d-b0b0-9cf53c7a5495-kube-api-access-vmv6c\") pod \"64efcf33-9ffe-402d-b0b0-9cf53c7a5495\" (UID: \"64efcf33-9ffe-402d-b0b0-9cf53c7a5495\") " Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.658283 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwnv8\" (UniqueName: \"kubernetes.io/projected/64a07672-60fb-4935-83e2-99f39e15427f-kube-api-access-qwnv8\") pod \"64a07672-60fb-4935-83e2-99f39e15427f\" (UID: 
\"64a07672-60fb-4935-83e2-99f39e15427f\") " Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.658314 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64a07672-60fb-4935-83e2-99f39e15427f-operator-scripts\") pod \"64a07672-60fb-4935-83e2-99f39e15427f\" (UID: \"64a07672-60fb-4935-83e2-99f39e15427f\") " Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.658358 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw24b\" (UniqueName: \"kubernetes.io/projected/2d4fb3ef-0cde-499c-8018-14cc96b495f5-kube-api-access-zw24b\") pod \"2d4fb3ef-0cde-499c-8018-14cc96b495f5\" (UID: \"2d4fb3ef-0cde-499c-8018-14cc96b495f5\") " Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.658456 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-ovsdbserver-sb\") pod \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\" (UID: \"1905ca79-a4c4-4286-8d88-2855e7b9ba4c\") " Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.659082 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4fb3ef-0cde-499c-8018-14cc96b495f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d4fb3ef-0cde-499c-8018-14cc96b495f5" (UID: "2d4fb3ef-0cde-499c-8018-14cc96b495f5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.659431 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64efcf33-9ffe-402d-b0b0-9cf53c7a5495-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64efcf33-9ffe-402d-b0b0-9cf53c7a5495" (UID: "64efcf33-9ffe-402d-b0b0-9cf53c7a5495"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.659664 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64a07672-60fb-4935-83e2-99f39e15427f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64a07672-60fb-4935-83e2-99f39e15427f" (UID: "64a07672-60fb-4935-83e2-99f39e15427f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.662804 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d4fb3ef-0cde-499c-8018-14cc96b495f5-kube-api-access-zw24b" (OuterVolumeSpecName: "kube-api-access-zw24b") pod "2d4fb3ef-0cde-499c-8018-14cc96b495f5" (UID: "2d4fb3ef-0cde-499c-8018-14cc96b495f5"). InnerVolumeSpecName "kube-api-access-zw24b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.663880 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64efcf33-9ffe-402d-b0b0-9cf53c7a5495-kube-api-access-vmv6c" (OuterVolumeSpecName: "kube-api-access-vmv6c") pod "64efcf33-9ffe-402d-b0b0-9cf53c7a5495" (UID: "64efcf33-9ffe-402d-b0b0-9cf53c7a5495"). InnerVolumeSpecName "kube-api-access-vmv6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.665897 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-kube-api-access-lpslv" (OuterVolumeSpecName: "kube-api-access-lpslv") pod "1905ca79-a4c4-4286-8d88-2855e7b9ba4c" (UID: "1905ca79-a4c4-4286-8d88-2855e7b9ba4c"). InnerVolumeSpecName "kube-api-access-lpslv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.672482 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a07672-60fb-4935-83e2-99f39e15427f-kube-api-access-qwnv8" (OuterVolumeSpecName: "kube-api-access-qwnv8") pod "64a07672-60fb-4935-83e2-99f39e15427f" (UID: "64a07672-60fb-4935-83e2-99f39e15427f"). InnerVolumeSpecName "kube-api-access-qwnv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.702643 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1905ca79-a4c4-4286-8d88-2855e7b9ba4c" (UID: "1905ca79-a4c4-4286-8d88-2855e7b9ba4c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.706085 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1905ca79-a4c4-4286-8d88-2855e7b9ba4c" (UID: "1905ca79-a4c4-4286-8d88-2855e7b9ba4c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.709150 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-config" (OuterVolumeSpecName: "config") pod "1905ca79-a4c4-4286-8d88-2855e7b9ba4c" (UID: "1905ca79-a4c4-4286-8d88-2855e7b9ba4c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.717854 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1905ca79-a4c4-4286-8d88-2855e7b9ba4c" (UID: "1905ca79-a4c4-4286-8d88-2855e7b9ba4c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.760175 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64a07672-60fb-4935-83e2-99f39e15427f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.760212 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw24b\" (UniqueName: \"kubernetes.io/projected/2d4fb3ef-0cde-499c-8018-14cc96b495f5-kube-api-access-zw24b\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.760227 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.760239 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d4fb3ef-0cde-499c-8018-14cc96b495f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.760251 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64efcf33-9ffe-402d-b0b0-9cf53c7a5495-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.760262 4676 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-lpslv\" (UniqueName: \"kubernetes.io/projected/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-kube-api-access-lpslv\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.760277 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.760289 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.760300 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1905ca79-a4c4-4286-8d88-2855e7b9ba4c-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.760313 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmv6c\" (UniqueName: \"kubernetes.io/projected/64efcf33-9ffe-402d-b0b0-9cf53c7a5495-kube-api-access-vmv6c\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:44 crc kubenswrapper[4676]: I0124 00:21:44.760325 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwnv8\" (UniqueName: \"kubernetes.io/projected/64a07672-60fb-4935-83e2-99f39e15427f-kube-api-access-qwnv8\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.029876 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mtdlz" Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.030088 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mtdlz" event={"ID":"2d4fb3ef-0cde-499c-8018-14cc96b495f5","Type":"ContainerDied","Data":"ca93660ad2310b325562d8e1908e400fca734f442cb3fbc769a5593ef16a799a"} Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.030333 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca93660ad2310b325562d8e1908e400fca734f442cb3fbc769a5593ef16a799a" Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.032095 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d058-account-create-update-pjzm7" event={"ID":"64a07672-60fb-4935-83e2-99f39e15427f","Type":"ContainerDied","Data":"aa778d2b9641cb4a032b28b9537d81b843cc0344050ae97830c3a489d9508916"} Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.032139 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa778d2b9641cb4a032b28b9537d81b843cc0344050ae97830c3a489d9508916" Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.032219 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d058-account-create-update-pjzm7" Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.037015 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ba07-account-create-update-hw2sx" Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.037477 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ba07-account-create-update-hw2sx" event={"ID":"9ace7871-3944-4c72-980f-c9d5e7d65c71","Type":"ContainerDied","Data":"7d5bd01e9496b9422e6f2c0b4d3d95295f8f687c97766a57ad1bfe5c28074b14"} Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.037600 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d5bd01e9496b9422e6f2c0b4d3d95295f8f687c97766a57ad1bfe5c28074b14" Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.038976 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-b4lc5" event={"ID":"64efcf33-9ffe-402d-b0b0-9cf53c7a5495","Type":"ContainerDied","Data":"e89a4d04ab1c50604d49d0624984dbc3a381642b8ef5d82c2bf3da9607b0c956"} Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.039074 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e89a4d04ab1c50604d49d0624984dbc3a381642b8ef5d82c2bf3da9607b0c956" Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.039077 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-b4lc5" Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.043966 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wg6tc" event={"ID":"30027bc7-2637-4abb-9568-1190cf80a3a5","Type":"ContainerStarted","Data":"2562adc9221a7b111bd58848de15d9abcd709528ca168c846292e16b34c5154e"} Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.048882 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" event={"ID":"1905ca79-a4c4-4286-8d88-2855e7b9ba4c","Type":"ContainerDied","Data":"b390252d36e77eb3a82a5578bfec5a23efd8e93f3aaecb313b5f2fd7e776552d"} Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.048940 4676 scope.go:117] "RemoveContainer" containerID="1d6d0279f417026b6ba6bd78b2b2fd59bc265f683401ca01652ddd44d6b071a2" Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.048940 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.052579 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4766-account-create-update-bbg95" event={"ID":"a61f15c0-2fa1-4d69-bb80-471d6e4e7e09","Type":"ContainerDied","Data":"497443544a8b1e5d2264898469d70d9d8eb5bc684f16d3e8ec4d93ed1fa60146"} Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.052606 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="497443544a8b1e5d2264898469d70d9d8eb5bc684f16d3e8ec4d93ed1fa60146" Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.052653 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4766-account-create-update-bbg95" Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.077619 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-wg6tc" podStartSLOduration=2.539306867 podStartE2EDuration="9.077601665s" podCreationTimestamp="2026-01-24 00:21:36 +0000 UTC" firstStartedPulling="2026-01-24 00:21:37.664499909 +0000 UTC m=+1081.694470910" lastFinishedPulling="2026-01-24 00:21:44.202794697 +0000 UTC m=+1088.232765708" observedRunningTime="2026-01-24 00:21:45.065800863 +0000 UTC m=+1089.095771874" watchObservedRunningTime="2026-01-24 00:21:45.077601665 +0000 UTC m=+1089.107572676" Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.093807 4676 scope.go:117] "RemoveContainer" containerID="837bffae0bd65cfec1fbd0e46c064b367c74a4b7d5a87b7e24ab722d34c34bd4" Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.177038 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-88xjq"] Jan 24 00:21:45 crc kubenswrapper[4676]: I0124 00:21:45.183197 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-88xjq"] Jan 24 00:21:46 crc kubenswrapper[4676]: I0124 00:21:46.173706 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-88xjq" podUID="1905ca79-a4c4-4286-8d88-2855e7b9ba4c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Jan 24 00:21:46 crc kubenswrapper[4676]: I0124 00:21:46.274743 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1905ca79-a4c4-4286-8d88-2855e7b9ba4c" path="/var/lib/kubelet/pods/1905ca79-a4c4-4286-8d88-2855e7b9ba4c/volumes" Jan 24 00:21:48 crc kubenswrapper[4676]: I0124 00:21:48.084360 4676 generic.go:334] "Generic (PLEG): container finished" podID="30027bc7-2637-4abb-9568-1190cf80a3a5" 
containerID="2562adc9221a7b111bd58848de15d9abcd709528ca168c846292e16b34c5154e" exitCode=0 Jan 24 00:21:48 crc kubenswrapper[4676]: I0124 00:21:48.085254 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wg6tc" event={"ID":"30027bc7-2637-4abb-9568-1190cf80a3a5","Type":"ContainerDied","Data":"2562adc9221a7b111bd58848de15d9abcd709528ca168c846292e16b34c5154e"} Jan 24 00:21:49 crc kubenswrapper[4676]: I0124 00:21:49.481948 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wg6tc" Jan 24 00:21:49 crc kubenswrapper[4676]: I0124 00:21:49.649750 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhdgs\" (UniqueName: \"kubernetes.io/projected/30027bc7-2637-4abb-9568-1190cf80a3a5-kube-api-access-bhdgs\") pod \"30027bc7-2637-4abb-9568-1190cf80a3a5\" (UID: \"30027bc7-2637-4abb-9568-1190cf80a3a5\") " Jan 24 00:21:49 crc kubenswrapper[4676]: I0124 00:21:49.650099 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30027bc7-2637-4abb-9568-1190cf80a3a5-config-data\") pod \"30027bc7-2637-4abb-9568-1190cf80a3a5\" (UID: \"30027bc7-2637-4abb-9568-1190cf80a3a5\") " Jan 24 00:21:49 crc kubenswrapper[4676]: I0124 00:21:49.650823 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30027bc7-2637-4abb-9568-1190cf80a3a5-combined-ca-bundle\") pod \"30027bc7-2637-4abb-9568-1190cf80a3a5\" (UID: \"30027bc7-2637-4abb-9568-1190cf80a3a5\") " Jan 24 00:21:49 crc kubenswrapper[4676]: I0124 00:21:49.656039 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30027bc7-2637-4abb-9568-1190cf80a3a5-kube-api-access-bhdgs" (OuterVolumeSpecName: "kube-api-access-bhdgs") pod "30027bc7-2637-4abb-9568-1190cf80a3a5" (UID: 
"30027bc7-2637-4abb-9568-1190cf80a3a5"). InnerVolumeSpecName "kube-api-access-bhdgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:49 crc kubenswrapper[4676]: I0124 00:21:49.681101 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30027bc7-2637-4abb-9568-1190cf80a3a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30027bc7-2637-4abb-9568-1190cf80a3a5" (UID: "30027bc7-2637-4abb-9568-1190cf80a3a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:21:49 crc kubenswrapper[4676]: I0124 00:21:49.699062 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30027bc7-2637-4abb-9568-1190cf80a3a5-config-data" (OuterVolumeSpecName: "config-data") pod "30027bc7-2637-4abb-9568-1190cf80a3a5" (UID: "30027bc7-2637-4abb-9568-1190cf80a3a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:21:49 crc kubenswrapper[4676]: I0124 00:21:49.751937 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhdgs\" (UniqueName: \"kubernetes.io/projected/30027bc7-2637-4abb-9568-1190cf80a3a5-kube-api-access-bhdgs\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:49 crc kubenswrapper[4676]: I0124 00:21:49.751975 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30027bc7-2637-4abb-9568-1190cf80a3a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:49 crc kubenswrapper[4676]: I0124 00:21:49.751990 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30027bc7-2637-4abb-9568-1190cf80a3a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.110880 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wg6tc" 
event={"ID":"30027bc7-2637-4abb-9568-1190cf80a3a5","Type":"ContainerDied","Data":"3b055c8876511446c351694e38bb4288f579441b122767bc434100cd495602c9"} Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.111190 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b055c8876511446c351694e38bb4288f579441b122767bc434100cd495602c9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.110997 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wg6tc" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.509471 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-z42h9"] Jan 24 00:21:50 crc kubenswrapper[4676]: E0124 00:21:50.509817 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a07672-60fb-4935-83e2-99f39e15427f" containerName="mariadb-account-create-update" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.509829 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a07672-60fb-4935-83e2-99f39e15427f" containerName="mariadb-account-create-update" Jan 24 00:21:50 crc kubenswrapper[4676]: E0124 00:21:50.509843 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1905ca79-a4c4-4286-8d88-2855e7b9ba4c" containerName="init" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.509849 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="1905ca79-a4c4-4286-8d88-2855e7b9ba4c" containerName="init" Jan 24 00:21:50 crc kubenswrapper[4676]: E0124 00:21:50.509863 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1905ca79-a4c4-4286-8d88-2855e7b9ba4c" containerName="dnsmasq-dns" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.509869 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="1905ca79-a4c4-4286-8d88-2855e7b9ba4c" containerName="dnsmasq-dns" Jan 24 00:21:50 crc kubenswrapper[4676]: E0124 00:21:50.509876 4676 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a61f15c0-2fa1-4d69-bb80-471d6e4e7e09" containerName="mariadb-account-create-update" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.509882 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61f15c0-2fa1-4d69-bb80-471d6e4e7e09" containerName="mariadb-account-create-update" Jan 24 00:21:50 crc kubenswrapper[4676]: E0124 00:21:50.509899 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4fb3ef-0cde-499c-8018-14cc96b495f5" containerName="mariadb-database-create" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.509904 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4fb3ef-0cde-499c-8018-14cc96b495f5" containerName="mariadb-database-create" Jan 24 00:21:50 crc kubenswrapper[4676]: E0124 00:21:50.509921 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64efcf33-9ffe-402d-b0b0-9cf53c7a5495" containerName="mariadb-database-create" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.509926 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="64efcf33-9ffe-402d-b0b0-9cf53c7a5495" containerName="mariadb-database-create" Jan 24 00:21:50 crc kubenswrapper[4676]: E0124 00:21:50.509934 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ace7871-3944-4c72-980f-c9d5e7d65c71" containerName="mariadb-account-create-update" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.509940 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ace7871-3944-4c72-980f-c9d5e7d65c71" containerName="mariadb-account-create-update" Jan 24 00:21:50 crc kubenswrapper[4676]: E0124 00:21:50.509949 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30027bc7-2637-4abb-9568-1190cf80a3a5" containerName="keystone-db-sync" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.509969 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="30027bc7-2637-4abb-9568-1190cf80a3a5" containerName="keystone-db-sync" Jan 24 
00:21:50 crc kubenswrapper[4676]: E0124 00:21:50.509983 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b488c3f8-bc04-4c8d-98b7-72145ae9e948" containerName="mariadb-database-create" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.509990 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="b488c3f8-bc04-4c8d-98b7-72145ae9e948" containerName="mariadb-database-create" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.510122 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ace7871-3944-4c72-980f-c9d5e7d65c71" containerName="mariadb-account-create-update" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.510135 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="b488c3f8-bc04-4c8d-98b7-72145ae9e948" containerName="mariadb-database-create" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.510147 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="30027bc7-2637-4abb-9568-1190cf80a3a5" containerName="keystone-db-sync" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.510156 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61f15c0-2fa1-4d69-bb80-471d6e4e7e09" containerName="mariadb-account-create-update" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.510166 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="1905ca79-a4c4-4286-8d88-2855e7b9ba4c" containerName="dnsmasq-dns" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.510176 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a07672-60fb-4935-83e2-99f39e15427f" containerName="mariadb-account-create-update" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.510187 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d4fb3ef-0cde-499c-8018-14cc96b495f5" containerName="mariadb-database-create" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.510198 4676 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="64efcf33-9ffe-402d-b0b0-9cf53c7a5495" containerName="mariadb-database-create" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.510976 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.516273 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dbwnn"] Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.517185 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.525213 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.525559 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.525574 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.525741 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.536173 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fcgg9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.549273 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-z42h9"] Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.561448 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dbwnn"] Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.568986 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-config-data\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.569031 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-credential-keys\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.569073 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qhg5\" (UniqueName: \"kubernetes.io/projected/158d259b-be19-41c8-af3a-a84a3efeb214-kube-api-access-8qhg5\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.569167 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-fernet-keys\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.569857 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-scripts\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.569921 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-combined-ca-bundle\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.671005 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-config-data\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.671054 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-credential-keys\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.671085 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.671110 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qhg5\" (UniqueName: \"kubernetes.io/projected/158d259b-be19-41c8-af3a-a84a3efeb214-kube-api-access-8qhg5\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.671136 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-config\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.671153 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-fernet-keys\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.671185 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-scripts\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.671200 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.671225 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-combined-ca-bundle\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.671248 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-dns-svc\") pod 
\"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.671282 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brrz6\" (UniqueName: \"kubernetes.io/projected/2ac46337-c6f1-4c10-8a36-b9d36017920a-kube-api-access-brrz6\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.671300 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.679759 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-config-data\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.680552 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-scripts\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.683707 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-credential-keys\") pod \"keystone-bootstrap-dbwnn\" (UID: 
\"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.684339 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-combined-ca-bundle\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.684911 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-fernet-keys\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.723924 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qhg5\" (UniqueName: \"kubernetes.io/projected/158d259b-be19-41c8-af3a-a84a3efeb214-kube-api-access-8qhg5\") pod \"keystone-bootstrap-dbwnn\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.754620 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5ff79c7c45-k8zk9"] Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.768033 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.772365 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-config\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.772446 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.772484 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.772523 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brrz6\" (UniqueName: \"kubernetes.io/projected/2ac46337-c6f1-4c10-8a36-b9d36017920a-kube-api-access-brrz6\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.772542 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 
00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.772585 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.773619 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.773687 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.774216 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.774640 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-config\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.774800 4676 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.782637 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.782911 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.798995 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-fmwfq" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.812040 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5ff79c7c45-k8zk9"] Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.828059 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.837239 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brrz6\" (UniqueName: \"kubernetes.io/projected/2ac46337-c6f1-4c10-8a36-b9d36017920a-kube-api-access-brrz6\") pod \"dnsmasq-dns-bbf5cc879-z42h9\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.840753 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.878824 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-scripts\") pod \"horizon-5ff79c7c45-k8zk9\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.878874 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-horizon-secret-key\") pod \"horizon-5ff79c7c45-k8zk9\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.878908 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-config-data\") pod \"horizon-5ff79c7c45-k8zk9\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.878944 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk5m5\" (UniqueName: \"kubernetes.io/projected/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-kube-api-access-xk5m5\") pod \"horizon-5ff79c7c45-k8zk9\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.878977 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-logs\") pod \"horizon-5ff79c7c45-k8zk9\" (UID: 
\"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.895587 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-bpbdz"] Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.896689 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bpbdz" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.922219 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xf4w5" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.922404 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.922515 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.923513 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bpbdz"] Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.987306 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-scripts\") pod \"horizon-5ff79c7c45-k8zk9\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.987350 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-horizon-secret-key\") pod \"horizon-5ff79c7c45-k8zk9\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.987576 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-config-data\") pod \"horizon-5ff79c7c45-k8zk9\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.987617 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk5m5\" (UniqueName: \"kubernetes.io/projected/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-kube-api-access-xk5m5\") pod \"horizon-5ff79c7c45-k8zk9\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.987652 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-logs\") pod \"horizon-5ff79c7c45-k8zk9\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.991408 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-logs\") pod \"horizon-5ff79c7c45-k8zk9\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.993792 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-config-data\") pod \"horizon-5ff79c7c45-k8zk9\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:50 crc kubenswrapper[4676]: I0124 00:21:50.996535 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-horizon-secret-key\") pod \"horizon-5ff79c7c45-k8zk9\" (UID: 
\"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:50.998553 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-scripts\") pod \"horizon-5ff79c7c45-k8zk9\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.046440 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7klnv"] Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.047403 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.050016 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7klnv"] Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.070193 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-z42h9"] Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.070765 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.072364 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.072645 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vxll9" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.072743 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.084043 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk5m5\" (UniqueName: \"kubernetes.io/projected/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-kube-api-access-xk5m5\") pod \"horizon-5ff79c7c45-k8zk9\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.085742 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-g5dnq"] Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.086597 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-g5dnq" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.089511 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qvjc\" (UniqueName: \"kubernetes.io/projected/04db8ba1-c0de-4985-8a48-ead625786472-kube-api-access-7qvjc\") pod \"barbican-db-sync-g5dnq\" (UID: \"04db8ba1-c0de-4985-8a48-ead625786472\") " pod="openstack/barbican-db-sync-g5dnq" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.089556 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-combined-ca-bundle\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.089604 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c23d24-6fd8-4660-a0f7-815fdf508f5b-combined-ca-bundle\") pod \"neutron-db-sync-bpbdz\" (UID: \"01c23d24-6fd8-4660-a0f7-815fdf508f5b\") " pod="openstack/neutron-db-sync-bpbdz" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.089626 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-etc-machine-id\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.089661 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-scripts\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " 
pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.089693 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-db-sync-config-data\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.089715 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04db8ba1-c0de-4985-8a48-ead625786472-combined-ca-bundle\") pod \"barbican-db-sync-g5dnq\" (UID: \"04db8ba1-c0de-4985-8a48-ead625786472\") " pod="openstack/barbican-db-sync-g5dnq" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.089742 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-config-data\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.089764 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/01c23d24-6fd8-4660-a0f7-815fdf508f5b-config\") pod \"neutron-db-sync-bpbdz\" (UID: \"01c23d24-6fd8-4660-a0f7-815fdf508f5b\") " pod="openstack/neutron-db-sync-bpbdz" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.089810 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mc9c\" (UniqueName: \"kubernetes.io/projected/01c23d24-6fd8-4660-a0f7-815fdf508f5b-kube-api-access-4mc9c\") pod \"neutron-db-sync-bpbdz\" (UID: \"01c23d24-6fd8-4660-a0f7-815fdf508f5b\") " 
pod="openstack/neutron-db-sync-bpbdz" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.089848 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04db8ba1-c0de-4985-8a48-ead625786472-db-sync-config-data\") pod \"barbican-db-sync-g5dnq\" (UID: \"04db8ba1-c0de-4985-8a48-ead625786472\") " pod="openstack/barbican-db-sync-g5dnq" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.089876 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsgpg\" (UniqueName: \"kubernetes.io/projected/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-kube-api-access-rsgpg\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.096850 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.132021 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-96w42" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.132325 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.165006 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g5dnq"] Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.195450 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-scripts\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.195539 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-db-sync-config-data\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.195575 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04db8ba1-c0de-4985-8a48-ead625786472-combined-ca-bundle\") pod \"barbican-db-sync-g5dnq\" (UID: \"04db8ba1-c0de-4985-8a48-ead625786472\") " pod="openstack/barbican-db-sync-g5dnq" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.195608 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-config-data\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.195641 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/01c23d24-6fd8-4660-a0f7-815fdf508f5b-config\") pod \"neutron-db-sync-bpbdz\" (UID: \"01c23d24-6fd8-4660-a0f7-815fdf508f5b\") " pod="openstack/neutron-db-sync-bpbdz" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.195694 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mc9c\" (UniqueName: \"kubernetes.io/projected/01c23d24-6fd8-4660-a0f7-815fdf508f5b-kube-api-access-4mc9c\") pod \"neutron-db-sync-bpbdz\" (UID: \"01c23d24-6fd8-4660-a0f7-815fdf508f5b\") " pod="openstack/neutron-db-sync-bpbdz" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.195746 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/04db8ba1-c0de-4985-8a48-ead625786472-db-sync-config-data\") pod \"barbican-db-sync-g5dnq\" (UID: \"04db8ba1-c0de-4985-8a48-ead625786472\") " pod="openstack/barbican-db-sync-g5dnq" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.195789 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsgpg\" (UniqueName: \"kubernetes.io/projected/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-kube-api-access-rsgpg\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.195847 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qvjc\" (UniqueName: \"kubernetes.io/projected/04db8ba1-c0de-4985-8a48-ead625786472-kube-api-access-7qvjc\") pod \"barbican-db-sync-g5dnq\" (UID: \"04db8ba1-c0de-4985-8a48-ead625786472\") " pod="openstack/barbican-db-sync-g5dnq" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.195877 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-combined-ca-bundle\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.195927 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c23d24-6fd8-4660-a0f7-815fdf508f5b-combined-ca-bundle\") pod \"neutron-db-sync-bpbdz\" (UID: \"01c23d24-6fd8-4660-a0f7-815fdf508f5b\") " pod="openstack/neutron-db-sync-bpbdz" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.196069 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-etc-machine-id\") pod 
\"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.196267 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-etc-machine-id\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.225588 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-config-data\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.252524 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-db-sync-config-data\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.253840 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/01c23d24-6fd8-4660-a0f7-815fdf508f5b-config\") pod \"neutron-db-sync-bpbdz\" (UID: \"01c23d24-6fd8-4660-a0f7-815fdf508f5b\") " pod="openstack/neutron-db-sync-bpbdz" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.254407 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04db8ba1-c0de-4985-8a48-ead625786472-db-sync-config-data\") pod \"barbican-db-sync-g5dnq\" (UID: \"04db8ba1-c0de-4985-8a48-ead625786472\") " pod="openstack/barbican-db-sync-g5dnq" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.254770 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-scripts\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.254894 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-combined-ca-bundle\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.305048 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04db8ba1-c0de-4985-8a48-ead625786472-combined-ca-bundle\") pod \"barbican-db-sync-g5dnq\" (UID: \"04db8ba1-c0de-4985-8a48-ead625786472\") " pod="openstack/barbican-db-sync-g5dnq" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.307486 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c23d24-6fd8-4660-a0f7-815fdf508f5b-combined-ca-bundle\") pod \"neutron-db-sync-bpbdz\" (UID: \"01c23d24-6fd8-4660-a0f7-815fdf508f5b\") " pod="openstack/neutron-db-sync-bpbdz" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.322586 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hpwcp"] Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.325108 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.345866 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsgpg\" (UniqueName: \"kubernetes.io/projected/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-kube-api-access-rsgpg\") pod \"cinder-db-sync-7klnv\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.377616 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7klnv" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.380195 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qvjc\" (UniqueName: \"kubernetes.io/projected/04db8ba1-c0de-4985-8a48-ead625786472-kube-api-access-7qvjc\") pod \"barbican-db-sync-g5dnq\" (UID: \"04db8ba1-c0de-4985-8a48-ead625786472\") " pod="openstack/barbican-db-sync-g5dnq" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.389861 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.393749 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.421899 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.422079 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.424779 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hpwcp"] Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.431231 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.431290 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.431429 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rjk5\" (UniqueName: \"kubernetes.io/projected/2417692e-7845-446e-a60a-bb3842851912-kube-api-access-5rjk5\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.431449 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.431484 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-config\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.431504 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.459764 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.460091 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g5dnq" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.479074 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-4kdvp"] Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.480335 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.495439 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6qnnp" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.495826 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.495996 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.496310 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-694b9fc4d7-xkkn8"] Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.498052 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.515730 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.517078 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.519418 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-s5zsb" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.519625 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 24 00:21:51 crc kubenswrapper[4676]: I0124 00:21:51.519762 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.523808 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.530282 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4kdvp"] Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536179 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c17fc5e6-983e-4678-b22c-c68686271163-logs\") pod \"placement-db-sync-4kdvp\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536243 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-config-data\") pod \"placement-db-sync-4kdvp\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536263 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-config-data\") pod \"ceilometer-0\" (UID: 
\"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536294 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536336 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rjk5\" (UniqueName: \"kubernetes.io/projected/2417692e-7845-446e-a60a-bb3842851912-kube-api-access-5rjk5\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536355 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tljs7\" (UniqueName: \"kubernetes.io/projected/54d56910-d4b7-45b1-8699-5af7eaa29b96-kube-api-access-tljs7\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536371 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536409 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-scripts\") pod \"placement-db-sync-4kdvp\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " 
pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536423 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv7wz\" (UniqueName: \"kubernetes.io/projected/c17fc5e6-983e-4678-b22c-c68686271163-kube-api-access-nv7wz\") pod \"placement-db-sync-4kdvp\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536439 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54d56910-d4b7-45b1-8699-5af7eaa29b96-log-httpd\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536453 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54d56910-d4b7-45b1-8699-5af7eaa29b96-run-httpd\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536467 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-scripts\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536486 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-config\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536507 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536528 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-combined-ca-bundle\") pod \"placement-db-sync-4kdvp\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536549 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536580 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.536606 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.537345 4676 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.538081 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.538585 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-config\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.542158 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.550898 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.560583 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-694b9fc4d7-xkkn8"] Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 
00:21:51.570925 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mc9c\" (UniqueName: \"kubernetes.io/projected/01c23d24-6fd8-4660-a0f7-815fdf508f5b-kube-api-access-4mc9c\") pod \"neutron-db-sync-bpbdz\" (UID: \"01c23d24-6fd8-4660-a0f7-815fdf508f5b\") " pod="openstack/neutron-db-sync-bpbdz" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.571236 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bpbdz" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.614239 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rjk5\" (UniqueName: \"kubernetes.io/projected/2417692e-7845-446e-a60a-bb3842851912-kube-api-access-5rjk5\") pod \"dnsmasq-dns-56df8fb6b7-hpwcp\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644417 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644459 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644502 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-logs\") pod \"glance-default-external-api-0\" (UID: 
\"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644519 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c17fc5e6-983e-4678-b22c-c68686271163-logs\") pod \"placement-db-sync-4kdvp\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644537 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644560 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-config-data\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644577 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd7b43d5-aa70-4c36-b1bb-4504bd262439-logs\") pod \"horizon-694b9fc4d7-xkkn8\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644610 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-scripts\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " 
pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644636 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-config-data\") pod \"placement-db-sync-4kdvp\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644653 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-config-data\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644673 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-487qg\" (UniqueName: \"kubernetes.io/projected/dd7b43d5-aa70-4c36-b1bb-4504bd262439-kube-api-access-487qg\") pod \"horizon-694b9fc4d7-xkkn8\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644696 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644717 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 
00:21:51.644735 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644760 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd7b43d5-aa70-4c36-b1bb-4504bd262439-config-data\") pod \"horizon-694b9fc4d7-xkkn8\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644776 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tljs7\" (UniqueName: \"kubernetes.io/projected/54d56910-d4b7-45b1-8699-5af7eaa29b96-kube-api-access-tljs7\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644794 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xjb8\" (UniqueName: \"kubernetes.io/projected/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-kube-api-access-8xjb8\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644814 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd7b43d5-aa70-4c36-b1bb-4504bd262439-scripts\") pod \"horizon-694b9fc4d7-xkkn8\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 
00:21:51.644832 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-scripts\") pod \"placement-db-sync-4kdvp\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644847 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv7wz\" (UniqueName: \"kubernetes.io/projected/c17fc5e6-983e-4678-b22c-c68686271163-kube-api-access-nv7wz\") pod \"placement-db-sync-4kdvp\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644861 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54d56910-d4b7-45b1-8699-5af7eaa29b96-log-httpd\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644875 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-scripts\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644889 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54d56910-d4b7-45b1-8699-5af7eaa29b96-run-httpd\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644914 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/dd7b43d5-aa70-4c36-b1bb-4504bd262439-horizon-secret-key\") pod \"horizon-694b9fc4d7-xkkn8\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644919 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c17fc5e6-983e-4678-b22c-c68686271163-logs\") pod \"placement-db-sync-4kdvp\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.644939 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-combined-ca-bundle\") pod \"placement-db-sync-4kdvp\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.650114 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54d56910-d4b7-45b1-8699-5af7eaa29b96-log-httpd\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.657390 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-scripts\") pod \"placement-db-sync-4kdvp\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.661955 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54d56910-d4b7-45b1-8699-5af7eaa29b96-run-httpd\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc 
kubenswrapper[4676]: I0124 00:21:51.695336 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.695813 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tljs7\" (UniqueName: \"kubernetes.io/projected/54d56910-d4b7-45b1-8699-5af7eaa29b96-kube-api-access-tljs7\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.707082 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-config-data\") pod \"placement-db-sync-4kdvp\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.707929 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-scripts\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.707958 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.708327 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-combined-ca-bundle\") pod \"placement-db-sync-4kdvp\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 
00:21:51.721435 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.753991 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv7wz\" (UniqueName: \"kubernetes.io/projected/c17fc5e6-983e-4678-b22c-c68686271163-kube-api-access-nv7wz\") pod \"placement-db-sync-4kdvp\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.760794 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd7b43d5-aa70-4c36-b1bb-4504bd262439-horizon-secret-key\") pod \"horizon-694b9fc4d7-xkkn8\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.761269 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.761313 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-logs\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.761334 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.761356 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-config-data\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.761370 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd7b43d5-aa70-4c36-b1bb-4504bd262439-logs\") pod \"horizon-694b9fc4d7-xkkn8\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.761402 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-scripts\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.761446 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-487qg\" (UniqueName: \"kubernetes.io/projected/dd7b43d5-aa70-4c36-b1bb-4504bd262439-kube-api-access-487qg\") pod \"horizon-694b9fc4d7-xkkn8\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.761468 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.761491 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.761514 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd7b43d5-aa70-4c36-b1bb-4504bd262439-config-data\") pod \"horizon-694b9fc4d7-xkkn8\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.761532 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xjb8\" (UniqueName: \"kubernetes.io/projected/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-kube-api-access-8xjb8\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.761551 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd7b43d5-aa70-4c36-b1bb-4504bd262439-scripts\") pod \"horizon-694b9fc4d7-xkkn8\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.779542 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-logs\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " 
pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.779750 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.796006 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd7b43d5-aa70-4c36-b1bb-4504bd262439-logs\") pod \"horizon-694b9fc4d7-xkkn8\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.796774 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.801138 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd7b43d5-aa70-4c36-b1bb-4504bd262439-config-data\") pod \"horizon-694b9fc4d7-xkkn8\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.801537 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd7b43d5-aa70-4c36-b1bb-4504bd262439-scripts\") pod \"horizon-694b9fc4d7-xkkn8\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.817894 4676 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-config-data\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.818179 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd7b43d5-aa70-4c36-b1bb-4504bd262439-horizon-secret-key\") pod \"horizon-694b9fc4d7-xkkn8\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.819238 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-config-data\") pod \"ceilometer-0\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.827423 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.866922 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-487qg\" (UniqueName: \"kubernetes.io/projected/dd7b43d5-aa70-4c36-b1bb-4504bd262439-kube-api-access-487qg\") pod \"horizon-694b9fc4d7-xkkn8\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.898880 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.898943 
4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-scripts\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.899299 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.902399 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dbwnn"] Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.925553 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xjb8\" (UniqueName: \"kubernetes.io/projected/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-kube-api-access-8xjb8\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.928095 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4kdvp" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.962259 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:51.966891 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.013969 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.015624 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.031746 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.031917 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.059004 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.073959 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.092611 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.092786 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-662sq\" (UniqueName: \"kubernetes.io/projected/f5867251-cbde-4d93-8900-b134f99aae39-kube-api-access-662sq\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.092970 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.093020 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5867251-cbde-4d93-8900-b134f99aae39-logs\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.093050 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5867251-cbde-4d93-8900-b134f99aae39-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") 
" pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.093190 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.093223 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.093334 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.194708 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.195252 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " 
pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.195318 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-662sq\" (UniqueName: \"kubernetes.io/projected/f5867251-cbde-4d93-8900-b134f99aae39-kube-api-access-662sq\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.195433 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.195482 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5867251-cbde-4d93-8900-b134f99aae39-logs\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.195505 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5867251-cbde-4d93-8900-b134f99aae39-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.195566 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: 
I0124 00:21:52.195590 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.199808 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dbwnn" event={"ID":"158d259b-be19-41c8-af3a-a84a3efeb214","Type":"ContainerStarted","Data":"7de28fe104be408633c60844298971c85a3dc1df0dbdf8b08da673fac8aac81a"} Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.199985 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.200205 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5867251-cbde-4d93-8900-b134f99aae39-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.200707 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5867251-cbde-4d93-8900-b134f99aae39-logs\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.209061 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.217144 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.218539 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.220235 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.225476 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-662sq\" (UniqueName: \"kubernetes.io/projected/f5867251-cbde-4d93-8900-b134f99aae39-kube-api-access-662sq\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.252726 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.283980 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:21:52 crc kubenswrapper[4676]: I0124 00:21:52.378036 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.127858 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5ff79c7c45-k8zk9"] Jan 24 00:21:53 crc kubenswrapper[4676]: W0124 00:21:53.134222 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ff71896_166b_4b5d_b11e_ebfb7d340e3d.slice/crio-dbda9f3c357ed36f88cb2516c4a0699d62520eb292d651052ab02beb2599dbcc WatchSource:0}: Error finding container dbda9f3c357ed36f88cb2516c4a0699d62520eb292d651052ab02beb2599dbcc: Status 404 returned error can't find the container with id dbda9f3c357ed36f88cb2516c4a0699d62520eb292d651052ab02beb2599dbcc Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.146979 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-z42h9"] Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.227835 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dbwnn" event={"ID":"158d259b-be19-41c8-af3a-a84a3efeb214","Type":"ContainerStarted","Data":"51869c6b7eeadd27a59a7b34889ba0093a949ad78c0444eec3606959ac5ba339"} Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.270720 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" 
event={"ID":"2ac46337-c6f1-4c10-8a36-b9d36017920a","Type":"ContainerStarted","Data":"69a7245a7f2fdb37f96d67c0c770f9ecb4530ed0f3864f0e13366799bfec1097"} Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.275213 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5ff79c7c45-k8zk9" event={"ID":"4ff71896-166b-4b5d-b11e-ebfb7d340e3d","Type":"ContainerStarted","Data":"dbda9f3c357ed36f88cb2516c4a0699d62520eb292d651052ab02beb2599dbcc"} Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.275403 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.286218 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dbwnn" podStartSLOduration=3.286200661 podStartE2EDuration="3.286200661s" podCreationTimestamp="2026-01-24 00:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:21:53.283613362 +0000 UTC m=+1097.313584363" watchObservedRunningTime="2026-01-24 00:21:53.286200661 +0000 UTC m=+1097.316171652" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.354848 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5ff79c7c45-k8zk9"] Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.414162 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.437483 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-797567cdb9-7p8fk"] Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.438885 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.447731 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-797567cdb9-7p8fk"] Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.535431 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.538227 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-logs\") pod \"horizon-797567cdb9-7p8fk\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.538329 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-horizon-secret-key\") pod \"horizon-797567cdb9-7p8fk\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.538427 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-config-data\") pod \"horizon-797567cdb9-7p8fk\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.538484 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-scripts\") pod \"horizon-797567cdb9-7p8fk\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 
00:21:53.538515 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn27t\" (UniqueName: \"kubernetes.io/projected/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-kube-api-access-hn27t\") pod \"horizon-797567cdb9-7p8fk\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.543646 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-694b9fc4d7-xkkn8"] Jan 24 00:21:53 crc kubenswrapper[4676]: W0124 00:21:53.559480 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd7b43d5_aa70_4c36_b1bb_4504bd262439.slice/crio-4152f9721423cdfc38209333d080f3609332e5c5cc25d9ded0d737267f382359 WatchSource:0}: Error finding container 4152f9721423cdfc38209333d080f3609332e5c5cc25d9ded0d737267f382359: Status 404 returned error can't find the container with id 4152f9721423cdfc38209333d080f3609332e5c5cc25d9ded0d737267f382359 Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.593643 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bpbdz"] Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.616063 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7klnv"] Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.621588 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hpwcp"] Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.639965 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-config-data\") pod \"horizon-797567cdb9-7p8fk\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.640018 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-scripts\") pod \"horizon-797567cdb9-7p8fk\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.646536 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn27t\" (UniqueName: \"kubernetes.io/projected/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-kube-api-access-hn27t\") pod \"horizon-797567cdb9-7p8fk\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.646615 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-logs\") pod \"horizon-797567cdb9-7p8fk\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.646833 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-horizon-secret-key\") pod \"horizon-797567cdb9-7p8fk\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.647893 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-logs\") pod \"horizon-797567cdb9-7p8fk\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.656429 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-scripts\") pod \"horizon-797567cdb9-7p8fk\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.658978 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-config-data\") pod \"horizon-797567cdb9-7p8fk\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.670794 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.695536 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-horizon-secret-key\") pod \"horizon-797567cdb9-7p8fk\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.729978 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn27t\" (UniqueName: \"kubernetes.io/projected/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-kube-api-access-hn27t\") pod \"horizon-797567cdb9-7p8fk\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.772697 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4kdvp"] Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.779696 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.802193 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g5dnq"] Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.842949 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 00:21:53 crc kubenswrapper[4676]: I0124 00:21:53.882424 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.297298 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-694b9fc4d7-xkkn8" event={"ID":"dd7b43d5-aa70-4c36-b1bb-4504bd262439","Type":"ContainerStarted","Data":"4152f9721423cdfc38209333d080f3609332e5c5cc25d9ded0d737267f382359"} Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.300436 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7klnv" event={"ID":"d1163cc6-7ce1-4f2b-9d4a-f3e215177842","Type":"ContainerStarted","Data":"70e37739b2a173b291b1d289a2d2839fa4f44eb620a1fa26acee32cd0b9f1716"} Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.302006 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe9c2a69-3ecd-49b3-87d1-5c42a90af428","Type":"ContainerStarted","Data":"1824159d235b75a9dce59175487613a8beccbee50938b73c47eb45eda971ba07"} Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.303977 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bpbdz" event={"ID":"01c23d24-6fd8-4660-a0f7-815fdf508f5b","Type":"ContainerStarted","Data":"192824bb8ad730af6c36357704c067406da5b951731ba1841c964e1a78d10bc2"} Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.303999 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bpbdz" 
event={"ID":"01c23d24-6fd8-4660-a0f7-815fdf508f5b","Type":"ContainerStarted","Data":"21e76890e9eced941fb1d30013143862a35607ef8c3138b99e6cd707709e85c8"} Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.306037 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54d56910-d4b7-45b1-8699-5af7eaa29b96","Type":"ContainerStarted","Data":"3a171779e2acfaaedcae6bf53bf013bc9cc89a659479afefe0a8349e50f12f73"} Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.310496 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4kdvp" event={"ID":"c17fc5e6-983e-4678-b22c-c68686271163","Type":"ContainerStarted","Data":"cb82773a377e4565776a6be3bc254bf675dcbd0497f37c26a350eb27b7526931"} Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.311691 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f5867251-cbde-4d93-8900-b134f99aae39","Type":"ContainerStarted","Data":"8f9b82184da00b28ad31edc75c3a11bd122ec52ae047c31171199a425be304fc"} Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.313071 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g5dnq" event={"ID":"04db8ba1-c0de-4985-8a48-ead625786472","Type":"ContainerStarted","Data":"534f710da5fb3c7907cdb2b5714b0d256a8306b5476b74f728dc51abaa10f9c0"} Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.318946 4676 generic.go:334] "Generic (PLEG): container finished" podID="2ac46337-c6f1-4c10-8a36-b9d36017920a" containerID="159d27826fe1136c6a706b48745620d8405fe60e7db1572dc165254bd5cc4a58" exitCode=0 Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.319216 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" event={"ID":"2ac46337-c6f1-4c10-8a36-b9d36017920a","Type":"ContainerDied","Data":"159d27826fe1136c6a706b48745620d8405fe60e7db1572dc165254bd5cc4a58"} Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 
00:21:54.321129 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-bpbdz" podStartSLOduration=4.321119793 podStartE2EDuration="4.321119793s" podCreationTimestamp="2026-01-24 00:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:21:54.314704717 +0000 UTC m=+1098.344675718" watchObservedRunningTime="2026-01-24 00:21:54.321119793 +0000 UTC m=+1098.351090794" Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.328073 4676 generic.go:334] "Generic (PLEG): container finished" podID="2417692e-7845-446e-a60a-bb3842851912" containerID="4c0f072637105b041cb28fbed44761b77e903f1e8315f62df0bde2e50a63a7fa" exitCode=0 Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.329186 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" event={"ID":"2417692e-7845-446e-a60a-bb3842851912","Type":"ContainerDied","Data":"4c0f072637105b041cb28fbed44761b77e903f1e8315f62df0bde2e50a63a7fa"} Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.329217 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" event={"ID":"2417692e-7845-446e-a60a-bb3842851912","Type":"ContainerStarted","Data":"a17194b7f35eb9f8070fa9acebc74f7ada011686f763d785550450b94c66597c"} Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.424112 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-797567cdb9-7p8fk"] Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.845220 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.983984 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-dns-svc\") pod \"2ac46337-c6f1-4c10-8a36-b9d36017920a\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.984362 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-dns-swift-storage-0\") pod \"2ac46337-c6f1-4c10-8a36-b9d36017920a\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.984609 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brrz6\" (UniqueName: \"kubernetes.io/projected/2ac46337-c6f1-4c10-8a36-b9d36017920a-kube-api-access-brrz6\") pod \"2ac46337-c6f1-4c10-8a36-b9d36017920a\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.984659 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-ovsdbserver-sb\") pod \"2ac46337-c6f1-4c10-8a36-b9d36017920a\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.984685 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-config\") pod \"2ac46337-c6f1-4c10-8a36-b9d36017920a\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " Jan 24 00:21:54 crc kubenswrapper[4676]: I0124 00:21:54.984718 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-ovsdbserver-nb\") pod \"2ac46337-c6f1-4c10-8a36-b9d36017920a\" (UID: \"2ac46337-c6f1-4c10-8a36-b9d36017920a\") " Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.026111 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-config" (OuterVolumeSpecName: "config") pod "2ac46337-c6f1-4c10-8a36-b9d36017920a" (UID: "2ac46337-c6f1-4c10-8a36-b9d36017920a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.028440 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ac46337-c6f1-4c10-8a36-b9d36017920a" (UID: "2ac46337-c6f1-4c10-8a36-b9d36017920a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.033734 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac46337-c6f1-4c10-8a36-b9d36017920a-kube-api-access-brrz6" (OuterVolumeSpecName: "kube-api-access-brrz6") pod "2ac46337-c6f1-4c10-8a36-b9d36017920a" (UID: "2ac46337-c6f1-4c10-8a36-b9d36017920a"). InnerVolumeSpecName "kube-api-access-brrz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.050214 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ac46337-c6f1-4c10-8a36-b9d36017920a" (UID: "2ac46337-c6f1-4c10-8a36-b9d36017920a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.073027 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2ac46337-c6f1-4c10-8a36-b9d36017920a" (UID: "2ac46337-c6f1-4c10-8a36-b9d36017920a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.088435 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brrz6\" (UniqueName: \"kubernetes.io/projected/2ac46337-c6f1-4c10-8a36-b9d36017920a-kube-api-access-brrz6\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.088473 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.088483 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.088492 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.088500 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.101271 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2ac46337-c6f1-4c10-8a36-b9d36017920a" (UID: "2ac46337-c6f1-4c10-8a36-b9d36017920a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.189802 4676 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ac46337-c6f1-4c10-8a36-b9d36017920a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.344920 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" event={"ID":"2ac46337-c6f1-4c10-8a36-b9d36017920a","Type":"ContainerDied","Data":"69a7245a7f2fdb37f96d67c0c770f9ecb4530ed0f3864f0e13366799bfec1097"} Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.345187 4676 scope.go:117] "RemoveContainer" containerID="159d27826fe1136c6a706b48745620d8405fe60e7db1572dc165254bd5cc4a58" Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.345280 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-z42h9" Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.350338 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe9c2a69-3ecd-49b3-87d1-5c42a90af428","Type":"ContainerStarted","Data":"9b6d1f8ce083f680faa19785ed5bbaf58ec2053f4ead9760d1b30372e7ebb323"} Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.357190 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-797567cdb9-7p8fk" event={"ID":"0b297a5e-7f42-435d-8c8e-fa8d113f8d74","Type":"ContainerStarted","Data":"d4a299117ca872f2981d137eaebb140c612fdabbc4e3e470ef6e031c80a0bce6"} Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.366594 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" event={"ID":"2417692e-7845-446e-a60a-bb3842851912","Type":"ContainerStarted","Data":"d6cc348312351f0aa6a3a89dece34eea57cb75b4f69f18c04d668c9984cdefc4"} Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.367183 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.389583 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f5867251-cbde-4d93-8900-b134f99aae39","Type":"ContainerStarted","Data":"3f5282d9d816b83ab02ae74e2aab26c447e6e2a07343b6a6dc5c5f59c5e32711"} Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.432686 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-z42h9"] Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.444786 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-z42h9"] Jan 24 00:21:55 crc kubenswrapper[4676]: I0124 00:21:55.446559 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" podStartSLOduration=4.446542696 podStartE2EDuration="4.446542696s" podCreationTimestamp="2026-01-24 00:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:21:55.420280882 +0000 UTC m=+1099.450251873" watchObservedRunningTime="2026-01-24 00:21:55.446542696 +0000 UTC m=+1099.476513697" Jan 24 00:21:56 crc kubenswrapper[4676]: I0124 00:21:56.293256 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ac46337-c6f1-4c10-8a36-b9d36017920a" path="/var/lib/kubelet/pods/2ac46337-c6f1-4c10-8a36-b9d36017920a/volumes" Jan 24 00:21:57 crc kubenswrapper[4676]: I0124 00:21:57.432947 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe9c2a69-3ecd-49b3-87d1-5c42a90af428","Type":"ContainerStarted","Data":"3b215804cc138ac23930129381e72626b89a03c7f85d05eb606c6576d28a097c"} Jan 24 00:21:57 crc kubenswrapper[4676]: I0124 00:21:57.433078 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fe9c2a69-3ecd-49b3-87d1-5c42a90af428" containerName="glance-log" containerID="cri-o://9b6d1f8ce083f680faa19785ed5bbaf58ec2053f4ead9760d1b30372e7ebb323" gracePeriod=30 Jan 24 00:21:57 crc kubenswrapper[4676]: I0124 00:21:57.433343 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fe9c2a69-3ecd-49b3-87d1-5c42a90af428" containerName="glance-httpd" containerID="cri-o://3b215804cc138ac23930129381e72626b89a03c7f85d05eb606c6576d28a097c" gracePeriod=30 Jan 24 00:21:57 crc kubenswrapper[4676]: I0124 00:21:57.435840 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"f5867251-cbde-4d93-8900-b134f99aae39","Type":"ContainerStarted","Data":"bc729c98cba3454809e024f8daa025b45fade86eac51870c7e12abcef362e249"} Jan 24 00:21:57 crc kubenswrapper[4676]: I0124 00:21:57.435983 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f5867251-cbde-4d93-8900-b134f99aae39" containerName="glance-log" containerID="cri-o://3f5282d9d816b83ab02ae74e2aab26c447e6e2a07343b6a6dc5c5f59c5e32711" gracePeriod=30 Jan 24 00:21:57 crc kubenswrapper[4676]: I0124 00:21:57.435995 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f5867251-cbde-4d93-8900-b134f99aae39" containerName="glance-httpd" containerID="cri-o://bc729c98cba3454809e024f8daa025b45fade86eac51870c7e12abcef362e249" gracePeriod=30 Jan 24 00:21:57 crc kubenswrapper[4676]: I0124 00:21:57.453869 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.453853955 podStartE2EDuration="6.453853955s" podCreationTimestamp="2026-01-24 00:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:21:57.451443711 +0000 UTC m=+1101.481414712" watchObservedRunningTime="2026-01-24 00:21:57.453853955 +0000 UTC m=+1101.483824956" Jan 24 00:21:58 crc kubenswrapper[4676]: I0124 00:21:58.466887 4676 generic.go:334] "Generic (PLEG): container finished" podID="f5867251-cbde-4d93-8900-b134f99aae39" containerID="bc729c98cba3454809e024f8daa025b45fade86eac51870c7e12abcef362e249" exitCode=0 Jan 24 00:21:58 crc kubenswrapper[4676]: I0124 00:21:58.467191 4676 generic.go:334] "Generic (PLEG): container finished" podID="f5867251-cbde-4d93-8900-b134f99aae39" containerID="3f5282d9d816b83ab02ae74e2aab26c447e6e2a07343b6a6dc5c5f59c5e32711" exitCode=143 Jan 24 00:21:58 crc kubenswrapper[4676]: 
I0124 00:21:58.467083 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f5867251-cbde-4d93-8900-b134f99aae39","Type":"ContainerDied","Data":"bc729c98cba3454809e024f8daa025b45fade86eac51870c7e12abcef362e249"} Jan 24 00:21:58 crc kubenswrapper[4676]: I0124 00:21:58.467271 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f5867251-cbde-4d93-8900-b134f99aae39","Type":"ContainerDied","Data":"3f5282d9d816b83ab02ae74e2aab26c447e6e2a07343b6a6dc5c5f59c5e32711"} Jan 24 00:21:58 crc kubenswrapper[4676]: I0124 00:21:58.477067 4676 generic.go:334] "Generic (PLEG): container finished" podID="fe9c2a69-3ecd-49b3-87d1-5c42a90af428" containerID="3b215804cc138ac23930129381e72626b89a03c7f85d05eb606c6576d28a097c" exitCode=0 Jan 24 00:21:58 crc kubenswrapper[4676]: I0124 00:21:58.477106 4676 generic.go:334] "Generic (PLEG): container finished" podID="fe9c2a69-3ecd-49b3-87d1-5c42a90af428" containerID="9b6d1f8ce083f680faa19785ed5bbaf58ec2053f4ead9760d1b30372e7ebb323" exitCode=143 Jan 24 00:21:58 crc kubenswrapper[4676]: I0124 00:21:58.477132 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe9c2a69-3ecd-49b3-87d1-5c42a90af428","Type":"ContainerDied","Data":"3b215804cc138ac23930129381e72626b89a03c7f85d05eb606c6576d28a097c"} Jan 24 00:21:58 crc kubenswrapper[4676]: I0124 00:21:58.477170 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe9c2a69-3ecd-49b3-87d1-5c42a90af428","Type":"ContainerDied","Data":"9b6d1f8ce083f680faa19785ed5bbaf58ec2053f4ead9760d1b30372e7ebb323"} Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.487149 4676 generic.go:334] "Generic (PLEG): container finished" podID="158d259b-be19-41c8-af3a-a84a3efeb214" containerID="51869c6b7eeadd27a59a7b34889ba0093a949ad78c0444eec3606959ac5ba339" exitCode=0 Jan 24 00:21:59 
crc kubenswrapper[4676]: I0124 00:21:59.487470 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dbwnn" event={"ID":"158d259b-be19-41c8-af3a-a84a3efeb214","Type":"ContainerDied","Data":"51869c6b7eeadd27a59a7b34889ba0093a949ad78c0444eec3606959ac5ba339"} Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.507591 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.507573425 podStartE2EDuration="9.507573425s" podCreationTimestamp="2026-01-24 00:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:21:57.487760323 +0000 UTC m=+1101.517731324" watchObservedRunningTime="2026-01-24 00:21:59.507573425 +0000 UTC m=+1103.537544426" Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.777564 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-694b9fc4d7-xkkn8"] Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.809764 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-bf988b4bd-ls7hp"] Jan 24 00:21:59 crc kubenswrapper[4676]: E0124 00:21:59.813322 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac46337-c6f1-4c10-8a36-b9d36017920a" containerName="init" Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.814459 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac46337-c6f1-4c10-8a36-b9d36017920a" containerName="init" Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.815361 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac46337-c6f1-4c10-8a36-b9d36017920a" containerName="init" Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.816302 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.824292 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.851934 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bf988b4bd-ls7hp"] Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.949348 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-797567cdb9-7p8fk"] Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.988434 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f876ddf46-fs7qv"] Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.990301 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.993774 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwffk\" (UniqueName: \"kubernetes.io/projected/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-kube-api-access-cwffk\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.993848 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-horizon-secret-key\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.993877 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-config-data\") pod \"horizon-bf988b4bd-ls7hp\" 
(UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.993914 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-horizon-tls-certs\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.994439 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-logs\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.994505 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-combined-ca-bundle\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:21:59 crc kubenswrapper[4676]: I0124 00:21:59.994569 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-scripts\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.006187 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f876ddf46-fs7qv"] Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.095794 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-horizon-secret-key\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.095835 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-horizon-secret-key\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.095861 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-config-data\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.095889 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-horizon-tls-certs\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.095905 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-logs\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.095925 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99tbt\" (UniqueName: 
\"kubernetes.io/projected/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-kube-api-access-99tbt\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.095975 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-logs\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.095996 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-combined-ca-bundle\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.096011 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-scripts\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.096031 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-horizon-tls-certs\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.096052 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-config-data\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.096067 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-scripts\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.096093 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-combined-ca-bundle\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.096114 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwffk\" (UniqueName: \"kubernetes.io/projected/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-kube-api-access-cwffk\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.101526 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-combined-ca-bundle\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.102694 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-config-data\") pod 
\"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.105716 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-horizon-tls-certs\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.105843 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-logs\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.106021 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-scripts\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.111768 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwffk\" (UniqueName: \"kubernetes.io/projected/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-kube-api-access-cwffk\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.115961 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-horizon-secret-key\") pod \"horizon-bf988b4bd-ls7hp\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc 
kubenswrapper[4676]: I0124 00:22:00.171420 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.198036 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-config-data\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.198097 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-combined-ca-bundle\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.198167 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-horizon-secret-key\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.198210 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-logs\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.198253 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99tbt\" (UniqueName: \"kubernetes.io/projected/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-kube-api-access-99tbt\") pod \"horizon-f876ddf46-fs7qv\" (UID: 
\"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.198338 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-scripts\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.198358 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-horizon-tls-certs\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.199716 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-config-data\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.202225 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-combined-ca-bundle\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.202251 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-logs\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.207831 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-horizon-secret-key\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.212250 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-scripts\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.213511 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-horizon-tls-certs\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.213851 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99tbt\" (UniqueName: \"kubernetes.io/projected/ac7dce6b-3bd9-4ad9-9485-83d9384b8bad-kube-api-access-99tbt\") pod \"horizon-f876ddf46-fs7qv\" (UID: \"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad\") " pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:00 crc kubenswrapper[4676]: I0124 00:22:00.320986 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:01 crc kubenswrapper[4676]: I0124 00:22:01.697924 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:22:01 crc kubenswrapper[4676]: I0124 00:22:01.815146 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-p2f7j"] Jan 24 00:22:01 crc kubenswrapper[4676]: I0124 00:22:01.815404 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" podUID="92c7f906-f96b-4d55-b771-6d6425b32b85" containerName="dnsmasq-dns" containerID="cri-o://eecf36d183fa31e03de5954a2aec194cfc8588694d23fe41c8598d42bf4437a3" gracePeriod=10 Jan 24 00:22:03 crc kubenswrapper[4676]: I0124 00:22:03.522743 4676 generic.go:334] "Generic (PLEG): container finished" podID="92c7f906-f96b-4d55-b771-6d6425b32b85" containerID="eecf36d183fa31e03de5954a2aec194cfc8588694d23fe41c8598d42bf4437a3" exitCode=0 Jan 24 00:22:03 crc kubenswrapper[4676]: I0124 00:22:03.522819 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" event={"ID":"92c7f906-f96b-4d55-b771-6d6425b32b85","Type":"ContainerDied","Data":"eecf36d183fa31e03de5954a2aec194cfc8588694d23fe41c8598d42bf4437a3"} Jan 24 00:22:04 crc kubenswrapper[4676]: I0124 00:22:04.543335 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" podUID="92c7f906-f96b-4d55-b771-6d6425b32b85" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.777461 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.780972 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.782338 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.851568 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-combined-ca-bundle\") pod \"f5867251-cbde-4d93-8900-b134f99aae39\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.851974 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-combined-ca-bundle\") pod \"158d259b-be19-41c8-af3a-a84a3efeb214\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852010 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-logs\") pod \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852097 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852126 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-httpd-run\") pod \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " Jan 24 00:22:05 crc 
kubenswrapper[4676]: I0124 00:22:05.852157 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-combined-ca-bundle\") pod \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852185 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5867251-cbde-4d93-8900-b134f99aae39-logs\") pod \"f5867251-cbde-4d93-8900-b134f99aae39\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852223 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-public-tls-certs\") pod \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852257 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-scripts\") pod \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852282 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-config-data\") pod \"f5867251-cbde-4d93-8900-b134f99aae39\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852312 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-scripts\") pod \"f5867251-cbde-4d93-8900-b134f99aae39\" 
(UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852356 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-scripts\") pod \"158d259b-be19-41c8-af3a-a84a3efeb214\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852425 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xjb8\" (UniqueName: \"kubernetes.io/projected/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-kube-api-access-8xjb8\") pod \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852456 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-fernet-keys\") pod \"158d259b-be19-41c8-af3a-a84a3efeb214\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852484 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-config-data\") pod \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\" (UID: \"fe9c2a69-3ecd-49b3-87d1-5c42a90af428\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852517 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qhg5\" (UniqueName: \"kubernetes.io/projected/158d259b-be19-41c8-af3a-a84a3efeb214-kube-api-access-8qhg5\") pod \"158d259b-be19-41c8-af3a-a84a3efeb214\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852561 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-internal-tls-certs\") pod \"f5867251-cbde-4d93-8900-b134f99aae39\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852584 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-662sq\" (UniqueName: \"kubernetes.io/projected/f5867251-cbde-4d93-8900-b134f99aae39-kube-api-access-662sq\") pod \"f5867251-cbde-4d93-8900-b134f99aae39\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852615 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-credential-keys\") pod \"158d259b-be19-41c8-af3a-a84a3efeb214\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852638 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-config-data\") pod \"158d259b-be19-41c8-af3a-a84a3efeb214\" (UID: \"158d259b-be19-41c8-af3a-a84a3efeb214\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852657 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"f5867251-cbde-4d93-8900-b134f99aae39\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.852679 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5867251-cbde-4d93-8900-b134f99aae39-httpd-run\") pod \"f5867251-cbde-4d93-8900-b134f99aae39\" (UID: \"f5867251-cbde-4d93-8900-b134f99aae39\") " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.854080 4676 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5867251-cbde-4d93-8900-b134f99aae39-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f5867251-cbde-4d93-8900-b134f99aae39" (UID: "f5867251-cbde-4d93-8900-b134f99aae39"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.874722 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "158d259b-be19-41c8-af3a-a84a3efeb214" (UID: "158d259b-be19-41c8-af3a-a84a3efeb214"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.874894 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5867251-cbde-4d93-8900-b134f99aae39-kube-api-access-662sq" (OuterVolumeSpecName: "kube-api-access-662sq") pod "f5867251-cbde-4d93-8900-b134f99aae39" (UID: "f5867251-cbde-4d93-8900-b134f99aae39"). InnerVolumeSpecName "kube-api-access-662sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.876817 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "158d259b-be19-41c8-af3a-a84a3efeb214" (UID: "158d259b-be19-41c8-af3a-a84a3efeb214"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.880155 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5867251-cbde-4d93-8900-b134f99aae39-logs" (OuterVolumeSpecName: "logs") pod "f5867251-cbde-4d93-8900-b134f99aae39" (UID: "f5867251-cbde-4d93-8900-b134f99aae39"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.880446 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-logs" (OuterVolumeSpecName: "logs") pod "fe9c2a69-3ecd-49b3-87d1-5c42a90af428" (UID: "fe9c2a69-3ecd-49b3-87d1-5c42a90af428"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.881609 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fe9c2a69-3ecd-49b3-87d1-5c42a90af428" (UID: "fe9c2a69-3ecd-49b3-87d1-5c42a90af428"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.888732 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "fe9c2a69-3ecd-49b3-87d1-5c42a90af428" (UID: "fe9c2a69-3ecd-49b3-87d1-5c42a90af428"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.888822 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "f5867251-cbde-4d93-8900-b134f99aae39" (UID: "f5867251-cbde-4d93-8900-b134f99aae39"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.889512 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-scripts" (OuterVolumeSpecName: "scripts") pod "158d259b-be19-41c8-af3a-a84a3efeb214" (UID: "158d259b-be19-41c8-af3a-a84a3efeb214"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.893319 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-kube-api-access-8xjb8" (OuterVolumeSpecName: "kube-api-access-8xjb8") pod "fe9c2a69-3ecd-49b3-87d1-5c42a90af428" (UID: "fe9c2a69-3ecd-49b3-87d1-5c42a90af428"). InnerVolumeSpecName "kube-api-access-8xjb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.893443 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/158d259b-be19-41c8-af3a-a84a3efeb214-kube-api-access-8qhg5" (OuterVolumeSpecName: "kube-api-access-8qhg5") pod "158d259b-be19-41c8-af3a-a84a3efeb214" (UID: "158d259b-be19-41c8-af3a-a84a3efeb214"). InnerVolumeSpecName "kube-api-access-8qhg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.898531 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-scripts" (OuterVolumeSpecName: "scripts") pod "fe9c2a69-3ecd-49b3-87d1-5c42a90af428" (UID: "fe9c2a69-3ecd-49b3-87d1-5c42a90af428"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.911559 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-scripts" (OuterVolumeSpecName: "scripts") pod "f5867251-cbde-4d93-8900-b134f99aae39" (UID: "f5867251-cbde-4d93-8900-b134f99aae39"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.941987 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "158d259b-be19-41c8-af3a-a84a3efeb214" (UID: "158d259b-be19-41c8-af3a-a84a3efeb214"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.946908 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5867251-cbde-4d93-8900-b134f99aae39" (UID: "f5867251-cbde-4d93-8900-b134f99aae39"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.954951 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.954977 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.954987 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.955008 4676 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.955017 4676 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.955102 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5867251-cbde-4d93-8900-b134f99aae39-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.955114 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.955122 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.955129 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.955138 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xjb8\" (UniqueName: \"kubernetes.io/projected/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-kube-api-access-8xjb8\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.955148 4676 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.955157 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qhg5\" (UniqueName: \"kubernetes.io/projected/158d259b-be19-41c8-af3a-a84a3efeb214-kube-api-access-8qhg5\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.955177 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-662sq\" (UniqueName: \"kubernetes.io/projected/f5867251-cbde-4d93-8900-b134f99aae39-kube-api-access-662sq\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.955185 4676 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.955199 4676 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 24 
00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.955208 4676 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5867251-cbde-4d93-8900-b134f99aae39-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.956507 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe9c2a69-3ecd-49b3-87d1-5c42a90af428" (UID: "fe9c2a69-3ecd-49b3-87d1-5c42a90af428"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.959038 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-config-data" (OuterVolumeSpecName: "config-data") pod "158d259b-be19-41c8-af3a-a84a3efeb214" (UID: "158d259b-be19-41c8-af3a-a84a3efeb214"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.982256 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-config-data" (OuterVolumeSpecName: "config-data") pod "f5867251-cbde-4d93-8900-b134f99aae39" (UID: "f5867251-cbde-4d93-8900-b134f99aae39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.983696 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f5867251-cbde-4d93-8900-b134f99aae39" (UID: "f5867251-cbde-4d93-8900-b134f99aae39"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.993526 4676 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.994717 4676 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 24 00:22:05 crc kubenswrapper[4676]: I0124 00:22:05.997666 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-config-data" (OuterVolumeSpecName: "config-data") pod "fe9c2a69-3ecd-49b3-87d1-5c42a90af428" (UID: "fe9c2a69-3ecd-49b3-87d1-5c42a90af428"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.005743 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fe9c2a69-3ecd-49b3-87d1-5c42a90af428" (UID: "fe9c2a69-3ecd-49b3-87d1-5c42a90af428"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.057069 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.057110 4676 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.057131 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/158d259b-be19-41c8-af3a-a84a3efeb214-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.057144 4676 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.057156 4676 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.057167 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.057181 4676 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe9c2a69-3ecd-49b3-87d1-5c42a90af428-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.057193 4676 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f5867251-cbde-4d93-8900-b134f99aae39-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.569187 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f5867251-cbde-4d93-8900-b134f99aae39","Type":"ContainerDied","Data":"8f9b82184da00b28ad31edc75c3a11bd122ec52ae047c31171199a425be304fc"} Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.569235 4676 scope.go:117] "RemoveContainer" containerID="bc729c98cba3454809e024f8daa025b45fade86eac51870c7e12abcef362e249" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.569337 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.576300 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dbwnn" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.576464 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dbwnn" event={"ID":"158d259b-be19-41c8-af3a-a84a3efeb214","Type":"ContainerDied","Data":"7de28fe104be408633c60844298971c85a3dc1df0dbdf8b08da673fac8aac81a"} Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.576499 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7de28fe104be408633c60844298971c85a3dc1df0dbdf8b08da673fac8aac81a" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.581443 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe9c2a69-3ecd-49b3-87d1-5c42a90af428","Type":"ContainerDied","Data":"1824159d235b75a9dce59175487613a8beccbee50938b73c47eb45eda971ba07"} Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.581489 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.605486 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.680137 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.693497 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.709409 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.720881 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 00:22:06 crc kubenswrapper[4676]: E0124 00:22:06.721314 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe9c2a69-3ecd-49b3-87d1-5c42a90af428" containerName="glance-log" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.721338 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe9c2a69-3ecd-49b3-87d1-5c42a90af428" containerName="glance-log" Jan 24 00:22:06 crc kubenswrapper[4676]: E0124 00:22:06.721361 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158d259b-be19-41c8-af3a-a84a3efeb214" containerName="keystone-bootstrap" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.721370 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="158d259b-be19-41c8-af3a-a84a3efeb214" containerName="keystone-bootstrap" Jan 24 00:22:06 crc kubenswrapper[4676]: E0124 00:22:06.721402 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe9c2a69-3ecd-49b3-87d1-5c42a90af428" containerName="glance-httpd" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.721411 4676 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fe9c2a69-3ecd-49b3-87d1-5c42a90af428" containerName="glance-httpd" Jan 24 00:22:06 crc kubenswrapper[4676]: E0124 00:22:06.721427 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5867251-cbde-4d93-8900-b134f99aae39" containerName="glance-httpd" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.721435 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5867251-cbde-4d93-8900-b134f99aae39" containerName="glance-httpd" Jan 24 00:22:06 crc kubenswrapper[4676]: E0124 00:22:06.721445 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5867251-cbde-4d93-8900-b134f99aae39" containerName="glance-log" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.721453 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5867251-cbde-4d93-8900-b134f99aae39" containerName="glance-log" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.725012 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe9c2a69-3ecd-49b3-87d1-5c42a90af428" containerName="glance-httpd" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.725047 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe9c2a69-3ecd-49b3-87d1-5c42a90af428" containerName="glance-log" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.725067 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5867251-cbde-4d93-8900-b134f99aae39" containerName="glance-log" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.725076 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5867251-cbde-4d93-8900-b134f99aae39" containerName="glance-httpd" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.725088 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="158d259b-be19-41c8-af3a-a84a3efeb214" containerName="keystone-bootstrap" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.726218 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.728288 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.729005 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.729832 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.729968 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-s5zsb" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.738337 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.739733 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.741474 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.741921 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.758617 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.782218 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.872902 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.872946 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blvls\" (UniqueName: \"kubernetes.io/projected/643e6d41-6572-4f21-8651-7f577967bfe8-kube-api-access-blvls\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.872975 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 
crc kubenswrapper[4676]: I0124 00:22:06.872998 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-config-data\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.873014 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-scripts\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.873035 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/643e6d41-6572-4f21-8651-7f577967bfe8-logs\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.873057 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-config-data\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.873082 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc 
kubenswrapper[4676]: I0124 00:22:06.873098 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77ec0563-46f1-45b0-892b-352d088f9517-logs\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.873122 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49fqw\" (UniqueName: \"kubernetes.io/projected/77ec0563-46f1-45b0-892b-352d088f9517-kube-api-access-49fqw\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.873139 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-scripts\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.873165 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.873272 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/643e6d41-6572-4f21-8651-7f577967bfe8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " 
pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.873312 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.873422 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77ec0563-46f1-45b0-892b-352d088f9517-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.873587 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.943291 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dbwnn"] Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.950211 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dbwnn"] Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.975090 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc 
kubenswrapper[4676]: I0124 00:22:06.975142 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blvls\" (UniqueName: \"kubernetes.io/projected/643e6d41-6572-4f21-8651-7f577967bfe8-kube-api-access-blvls\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.975174 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.975195 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-config-data\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.975210 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-scripts\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.975229 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/643e6d41-6572-4f21-8651-7f577967bfe8-logs\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.975251 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-config-data\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.975276 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.975294 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77ec0563-46f1-45b0-892b-352d088f9517-logs\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.975317 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49fqw\" (UniqueName: \"kubernetes.io/projected/77ec0563-46f1-45b0-892b-352d088f9517-kube-api-access-49fqw\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.975334 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-scripts\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.975360 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.975398 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/643e6d41-6572-4f21-8651-7f577967bfe8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.975415 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.975449 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77ec0563-46f1-45b0-892b-352d088f9517-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.975473 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.981590 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.982012 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/643e6d41-6572-4f21-8651-7f577967bfe8-logs\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.983242 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.983489 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.983619 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77ec0563-46f1-45b0-892b-352d088f9517-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.983759 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/643e6d41-6572-4f21-8651-7f577967bfe8-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.984071 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.985274 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77ec0563-46f1-45b0-892b-352d088f9517-logs\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.989803 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-config-data\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.990720 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.994992 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-scripts\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " 
pod="openstack/glance-default-external-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.996554 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-scripts\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:06 crc kubenswrapper[4676]: I0124 00:22:06.997226 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-config-data\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.002671 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.002769 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49fqw\" (UniqueName: \"kubernetes.io/projected/77ec0563-46f1-45b0-892b-352d088f9517-kube-api-access-49fqw\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.010661 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blvls\" (UniqueName: \"kubernetes.io/projected/643e6d41-6572-4f21-8651-7f577967bfe8-kube-api-access-blvls\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:07 crc 
kubenswrapper[4676]: I0124 00:22:07.014555 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " pod="openstack/glance-default-external-api-0" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.033579 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.055827 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-l49lm"] Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.056877 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.059873 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.060118 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.060263 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.060495 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fcgg9" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.060683 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.065862 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.077829 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.078050 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l49lm"] Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.178950 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-credential-keys\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.179001 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjz2c\" (UniqueName: \"kubernetes.io/projected/a7327fe7-179a-4492-8823-94b1067c17d4-kube-api-access-wjz2c\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.179023 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-scripts\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.179130 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-combined-ca-bundle\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " 
pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.179183 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-config-data\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.179231 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-fernet-keys\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.286345 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-fernet-keys\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.286494 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-credential-keys\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.286524 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjz2c\" (UniqueName: \"kubernetes.io/projected/a7327fe7-179a-4492-8823-94b1067c17d4-kube-api-access-wjz2c\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc 
kubenswrapper[4676]: I0124 00:22:07.286546 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-scripts\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.286616 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-combined-ca-bundle\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.286654 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-config-data\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.290588 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-scripts\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.292082 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-config-data\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.302286 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-fernet-keys\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.302915 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-combined-ca-bundle\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.320266 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-credential-keys\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.339663 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjz2c\" (UniqueName: \"kubernetes.io/projected/a7327fe7-179a-4492-8823-94b1067c17d4-kube-api-access-wjz2c\") pod \"keystone-bootstrap-l49lm\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:07 crc kubenswrapper[4676]: I0124 00:22:07.434830 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:08 crc kubenswrapper[4676]: I0124 00:22:08.264348 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="158d259b-be19-41c8-af3a-a84a3efeb214" path="/var/lib/kubelet/pods/158d259b-be19-41c8-af3a-a84a3efeb214/volumes" Jan 24 00:22:08 crc kubenswrapper[4676]: I0124 00:22:08.265446 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5867251-cbde-4d93-8900-b134f99aae39" path="/var/lib/kubelet/pods/f5867251-cbde-4d93-8900-b134f99aae39/volumes" Jan 24 00:22:08 crc kubenswrapper[4676]: I0124 00:22:08.266482 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe9c2a69-3ecd-49b3-87d1-5c42a90af428" path="/var/lib/kubelet/pods/fe9c2a69-3ecd-49b3-87d1-5c42a90af428/volumes" Jan 24 00:22:11 crc kubenswrapper[4676]: E0124 00:22:11.095736 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 24 00:22:11 crc kubenswrapper[4676]: E0124 00:22:11.096657 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbdh8bh5cdh64fh9bh6ch5f5h7dh5cbh5f7h86h5b6h8bh5cch567h68dh596h5ch548h666h5ch569hddh57fh5d6hcch684hc9h667h56dh659hd4q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xk5m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5ff79c7c45-k8zk9_openstack(4ff71896-166b-4b5d-b11e-ebfb7d340e3d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:22:11 crc kubenswrapper[4676]: E0124 
00:22:11.098101 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 24 00:22:11 crc kubenswrapper[4676]: E0124 00:22:11.098296 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67ch5fhb4h554h599h5f5h5d4h5d6h699hb4h5dh8dh5dfh79h74h5ch64ch666h668h665h5b9h68fh699h598h67bh655h569h55h584h665h649h87q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hn27t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,Se
ccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-797567cdb9-7p8fk_openstack(0b297a5e-7f42-435d-8c8e-fa8d113f8d74): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:22:11 crc kubenswrapper[4676]: E0124 00:22:11.100481 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5ff79c7c45-k8zk9" podUID="4ff71896-166b-4b5d-b11e-ebfb7d340e3d" Jan 24 00:22:11 crc kubenswrapper[4676]: E0124 00:22:11.100965 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-797567cdb9-7p8fk" podUID="0b297a5e-7f42-435d-8c8e-fa8d113f8d74" Jan 24 00:22:11 crc kubenswrapper[4676]: E0124 00:22:11.117100 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 24 00:22:11 crc kubenswrapper[4676]: E0124 00:22:11.117330 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d6h656h596hc8h5ddhbdh5d9h68h564h678h554h99hb5h68ch77h8h584h6bh5cbh679hddh5fbh64h5f6h58h669h5b8h5fbh688h59fh58fh9cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-487qg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-694b9fc4d7-xkkn8_openstack(dd7b43d5-aa70-4c36-b1bb-4504bd262439): ErrImagePull: rpc 
error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:22:11 crc kubenswrapper[4676]: E0124 00:22:11.119354 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-694b9fc4d7-xkkn8" podUID="dd7b43d5-aa70-4c36-b1bb-4504bd262439" Jan 24 00:22:11 crc kubenswrapper[4676]: E0124 00:22:11.567750 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 24 00:22:11 crc kubenswrapper[4676]: E0124 00:22:11.567907 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qvjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-g5dnq_openstack(04db8ba1-c0de-4985-8a48-ead625786472): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:22:11 crc kubenswrapper[4676]: E0124 00:22:11.569088 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-g5dnq" 
podUID="04db8ba1-c0de-4985-8a48-ead625786472" Jan 24 00:22:11 crc kubenswrapper[4676]: E0124 00:22:11.622678 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-g5dnq" podUID="04db8ba1-c0de-4985-8a48-ead625786472" Jan 24 00:22:14 crc kubenswrapper[4676]: I0124 00:22:14.543192 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" podUID="92c7f906-f96b-4d55-b771-6d6425b32b85" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Jan 24 00:22:19 crc kubenswrapper[4676]: I0124 00:22:19.544704 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" podUID="92c7f906-f96b-4d55-b771-6d6425b32b85" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Jan 24 00:22:19 crc kubenswrapper[4676]: I0124 00:22:19.545529 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.327391 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.334170 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.343711 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.357640 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487124 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-dns-swift-storage-0\") pod \"92c7f906-f96b-4d55-b771-6d6425b32b85\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487188 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-logs\") pod \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487215 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-config\") pod \"92c7f906-f96b-4d55-b771-6d6425b32b85\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487243 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-ovsdbserver-sb\") pod \"92c7f906-f96b-4d55-b771-6d6425b32b85\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487263 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd7b43d5-aa70-4c36-b1bb-4504bd262439-logs\") pod \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487316 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-scripts\") pod \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487366 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-config-data\") pod \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487413 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-dns-svc\") pod \"92c7f906-f96b-4d55-b771-6d6425b32b85\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487467 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-487qg\" (UniqueName: \"kubernetes.io/projected/dd7b43d5-aa70-4c36-b1bb-4504bd262439-kube-api-access-487qg\") pod \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487499 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-logs\") pod \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487534 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-config-data\") pod \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487560 4676 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-scripts\") pod \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487609 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-horizon-secret-key\") pod \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487634 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd7b43d5-aa70-4c36-b1bb-4504bd262439-horizon-secret-key\") pod \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487665 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-horizon-secret-key\") pod \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487686 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd7b43d5-aa70-4c36-b1bb-4504bd262439-config-data\") pod \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487733 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-ovsdbserver-nb\") pod 
\"92c7f906-f96b-4d55-b771-6d6425b32b85\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487779 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk5m5\" (UniqueName: \"kubernetes.io/projected/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-kube-api-access-xk5m5\") pod \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\" (UID: \"4ff71896-166b-4b5d-b11e-ebfb7d340e3d\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487811 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd7b43d5-aa70-4c36-b1bb-4504bd262439-scripts\") pod \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\" (UID: \"dd7b43d5-aa70-4c36-b1bb-4504bd262439\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487839 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn27t\" (UniqueName: \"kubernetes.io/projected/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-kube-api-access-hn27t\") pod \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\" (UID: \"0b297a5e-7f42-435d-8c8e-fa8d113f8d74\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.487876 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68mnw\" (UniqueName: \"kubernetes.io/projected/92c7f906-f96b-4d55-b771-6d6425b32b85-kube-api-access-68mnw\") pod \"92c7f906-f96b-4d55-b771-6d6425b32b85\" (UID: \"92c7f906-f96b-4d55-b771-6d6425b32b85\") " Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.488315 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-scripts" (OuterVolumeSpecName: "scripts") pod "0b297a5e-7f42-435d-8c8e-fa8d113f8d74" (UID: "0b297a5e-7f42-435d-8c8e-fa8d113f8d74"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.488504 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd7b43d5-aa70-4c36-b1bb-4504bd262439-logs" (OuterVolumeSpecName: "logs") pod "dd7b43d5-aa70-4c36-b1bb-4504bd262439" (UID: "dd7b43d5-aa70-4c36-b1bb-4504bd262439"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.489225 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd7b43d5-aa70-4c36-b1bb-4504bd262439-scripts" (OuterVolumeSpecName: "scripts") pod "dd7b43d5-aa70-4c36-b1bb-4504bd262439" (UID: "dd7b43d5-aa70-4c36-b1bb-4504bd262439"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.489604 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-config-data" (OuterVolumeSpecName: "config-data") pod "4ff71896-166b-4b5d-b11e-ebfb7d340e3d" (UID: "4ff71896-166b-4b5d-b11e-ebfb7d340e3d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.489769 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-logs" (OuterVolumeSpecName: "logs") pod "4ff71896-166b-4b5d-b11e-ebfb7d340e3d" (UID: "4ff71896-166b-4b5d-b11e-ebfb7d340e3d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.489935 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd7b43d5-aa70-4c36-b1bb-4504bd262439-config-data" (OuterVolumeSpecName: "config-data") pod "dd7b43d5-aa70-4c36-b1bb-4504bd262439" (UID: "dd7b43d5-aa70-4c36-b1bb-4504bd262439"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.493296 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd7b43d5-aa70-4c36-b1bb-4504bd262439-kube-api-access-487qg" (OuterVolumeSpecName: "kube-api-access-487qg") pod "dd7b43d5-aa70-4c36-b1bb-4504bd262439" (UID: "dd7b43d5-aa70-4c36-b1bb-4504bd262439"). InnerVolumeSpecName "kube-api-access-487qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.493656 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-kube-api-access-xk5m5" (OuterVolumeSpecName: "kube-api-access-xk5m5") pod "4ff71896-166b-4b5d-b11e-ebfb7d340e3d" (UID: "4ff71896-166b-4b5d-b11e-ebfb7d340e3d"). InnerVolumeSpecName "kube-api-access-xk5m5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.493681 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-scripts" (OuterVolumeSpecName: "scripts") pod "4ff71896-166b-4b5d-b11e-ebfb7d340e3d" (UID: "4ff71896-166b-4b5d-b11e-ebfb7d340e3d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.494138 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-logs" (OuterVolumeSpecName: "logs") pod "0b297a5e-7f42-435d-8c8e-fa8d113f8d74" (UID: "0b297a5e-7f42-435d-8c8e-fa8d113f8d74"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.494607 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-config-data" (OuterVolumeSpecName: "config-data") pod "0b297a5e-7f42-435d-8c8e-fa8d113f8d74" (UID: "0b297a5e-7f42-435d-8c8e-fa8d113f8d74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.495349 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-kube-api-access-hn27t" (OuterVolumeSpecName: "kube-api-access-hn27t") pod "0b297a5e-7f42-435d-8c8e-fa8d113f8d74" (UID: "0b297a5e-7f42-435d-8c8e-fa8d113f8d74"). InnerVolumeSpecName "kube-api-access-hn27t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.497107 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c7f906-f96b-4d55-b771-6d6425b32b85-kube-api-access-68mnw" (OuterVolumeSpecName: "kube-api-access-68mnw") pod "92c7f906-f96b-4d55-b771-6d6425b32b85" (UID: "92c7f906-f96b-4d55-b771-6d6425b32b85"). InnerVolumeSpecName "kube-api-access-68mnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.508982 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0b297a5e-7f42-435d-8c8e-fa8d113f8d74" (UID: "0b297a5e-7f42-435d-8c8e-fa8d113f8d74"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.509702 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4ff71896-166b-4b5d-b11e-ebfb7d340e3d" (UID: "4ff71896-166b-4b5d-b11e-ebfb7d340e3d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.511978 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7b43d5-aa70-4c36-b1bb-4504bd262439-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "dd7b43d5-aa70-4c36-b1bb-4504bd262439" (UID: "dd7b43d5-aa70-4c36-b1bb-4504bd262439"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.528899 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "92c7f906-f96b-4d55-b771-6d6425b32b85" (UID: "92c7f906-f96b-4d55-b771-6d6425b32b85"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.532689 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-config" (OuterVolumeSpecName: "config") pod "92c7f906-f96b-4d55-b771-6d6425b32b85" (UID: "92c7f906-f96b-4d55-b771-6d6425b32b85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.536118 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "92c7f906-f96b-4d55-b771-6d6425b32b85" (UID: "92c7f906-f96b-4d55-b771-6d6425b32b85"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.542236 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92c7f906-f96b-4d55-b771-6d6425b32b85" (UID: "92c7f906-f96b-4d55-b771-6d6425b32b85"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.561668 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "92c7f906-f96b-4d55-b771-6d6425b32b85" (UID: "92c7f906-f96b-4d55-b771-6d6425b32b85"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591202 4676 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591246 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591261 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591271 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591281 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd7b43d5-aa70-4c36-b1bb-4504bd262439-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591290 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591323 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591331 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591340 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-487qg\" (UniqueName: \"kubernetes.io/projected/dd7b43d5-aa70-4c36-b1bb-4504bd262439-kube-api-access-487qg\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591351 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591359 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591367 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591387 4676 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591394 4676 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd7b43d5-aa70-4c36-b1bb-4504bd262439-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591403 4676 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 24 
00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591429 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd7b43d5-aa70-4c36-b1bb-4504bd262439-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591436 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92c7f906-f96b-4d55-b771-6d6425b32b85-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591444 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk5m5\" (UniqueName: \"kubernetes.io/projected/4ff71896-166b-4b5d-b11e-ebfb7d340e3d-kube-api-access-xk5m5\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591451 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd7b43d5-aa70-4c36-b1bb-4504bd262439-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591460 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn27t\" (UniqueName: \"kubernetes.io/projected/0b297a5e-7f42-435d-8c8e-fa8d113f8d74-kube-api-access-hn27t\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.591468 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68mnw\" (UniqueName: \"kubernetes.io/projected/92c7f906-f96b-4d55-b771-6d6425b32b85-kube-api-access-68mnw\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.730351 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-797567cdb9-7p8fk" event={"ID":"0b297a5e-7f42-435d-8c8e-fa8d113f8d74","Type":"ContainerDied","Data":"d4a299117ca872f2981d137eaebb140c612fdabbc4e3e470ef6e031c80a0bce6"} Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.730642 4676 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-797567cdb9-7p8fk" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.743519 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-694b9fc4d7-xkkn8" event={"ID":"dd7b43d5-aa70-4c36-b1bb-4504bd262439","Type":"ContainerDied","Data":"4152f9721423cdfc38209333d080f3609332e5c5cc25d9ded0d737267f382359"} Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.743536 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-694b9fc4d7-xkkn8" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.751434 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5ff79c7c45-k8zk9" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.751450 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5ff79c7c45-k8zk9" event={"ID":"4ff71896-166b-4b5d-b11e-ebfb7d340e3d","Type":"ContainerDied","Data":"dbda9f3c357ed36f88cb2516c4a0699d62520eb292d651052ab02beb2599dbcc"} Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.754357 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" event={"ID":"92c7f906-f96b-4d55-b771-6d6425b32b85","Type":"ContainerDied","Data":"af5442d400f3ec8197d8bc02bc11e29bed515b4e2a1c69855707d8adbf0ab335"} Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.754436 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.810141 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-797567cdb9-7p8fk"] Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.830153 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-797567cdb9-7p8fk"] Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.849828 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-p2f7j"] Jan 24 00:22:22 crc kubenswrapper[4676]: E0124 00:22:22.852303 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 24 00:22:22 crc kubenswrapper[4676]: E0124 00:22:22.852490 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65fh8hfbh5b5h596h58fh6ch66fh585h546h5c6h4h568h6fh5bdh68h5f6h658h75h64bh577h645h545h65ch677h684h587h77h649h694h64fh7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tljs7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(54d56910-d4b7-45b1-8699-5af7eaa29b96): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.869916 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-p2f7j"] Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.915463 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-694b9fc4d7-xkkn8"] Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.927333 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-694b9fc4d7-xkkn8"] Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.940673 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5ff79c7c45-k8zk9"] Jan 24 00:22:22 crc kubenswrapper[4676]: I0124 00:22:22.946743 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5ff79c7c45-k8zk9"] Jan 24 00:22:23 crc kubenswrapper[4676]: I0124 00:22:23.767449 4676 generic.go:334] "Generic (PLEG): container finished" podID="01c23d24-6fd8-4660-a0f7-815fdf508f5b" 
containerID="192824bb8ad730af6c36357704c067406da5b951731ba1841c964e1a78d10bc2" exitCode=0 Jan 24 00:22:23 crc kubenswrapper[4676]: I0124 00:22:23.767541 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bpbdz" event={"ID":"01c23d24-6fd8-4660-a0f7-815fdf508f5b","Type":"ContainerDied","Data":"192824bb8ad730af6c36357704c067406da5b951731ba1841c964e1a78d10bc2"} Jan 24 00:22:23 crc kubenswrapper[4676]: I0124 00:22:23.822879 4676 scope.go:117] "RemoveContainer" containerID="3f5282d9d816b83ab02ae74e2aab26c447e6e2a07343b6a6dc5c5f59c5e32711" Jan 24 00:22:23 crc kubenswrapper[4676]: E0124 00:22:23.852187 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 24 00:22:23 crc kubenswrapper[4676]: E0124 00:22:23.852583 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsgpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7klnv_openstack(d1163cc6-7ce1-4f2b-9d4a-f3e215177842): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:22:23 crc kubenswrapper[4676]: E0124 00:22:23.853819 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7klnv" podUID="d1163cc6-7ce1-4f2b-9d4a-f3e215177842" Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.018511 4676 scope.go:117] "RemoveContainer" containerID="3b215804cc138ac23930129381e72626b89a03c7f85d05eb606c6576d28a097c" Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.099340 4676 scope.go:117] "RemoveContainer" containerID="9b6d1f8ce083f680faa19785ed5bbaf58ec2053f4ead9760d1b30372e7ebb323" Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.152491 4676 scope.go:117] "RemoveContainer" containerID="eecf36d183fa31e03de5954a2aec194cfc8588694d23fe41c8598d42bf4437a3" Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.182189 4676 scope.go:117] "RemoveContainer" containerID="041801085cb8948d9dc29d8e4969e7d85a03615218568807e333c3bd95e75583" Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.273152 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b297a5e-7f42-435d-8c8e-fa8d113f8d74" path="/var/lib/kubelet/pods/0b297a5e-7f42-435d-8c8e-fa8d113f8d74/volumes" Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.273618 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ff71896-166b-4b5d-b11e-ebfb7d340e3d" path="/var/lib/kubelet/pods/4ff71896-166b-4b5d-b11e-ebfb7d340e3d/volumes" Jan 24 00:22:24 crc kubenswrapper[4676]: 
I0124 00:22:24.273995 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92c7f906-f96b-4d55-b771-6d6425b32b85" path="/var/lib/kubelet/pods/92c7f906-f96b-4d55-b771-6d6425b32b85/volumes" Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.274609 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd7b43d5-aa70-4c36-b1bb-4504bd262439" path="/var/lib/kubelet/pods/dd7b43d5-aa70-4c36-b1bb-4504bd262439/volumes" Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.396292 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l49lm"] Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.405339 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f876ddf46-fs7qv"] Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.412138 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bf988b4bd-ls7hp"] Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.545560 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-p2f7j" podUID="92c7f906-f96b-4d55-b771-6d6425b32b85" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.605776 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 00:22:24 crc kubenswrapper[4676]: W0124 00:22:24.676675 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77ec0563_46f1_45b0_892b_352d088f9517.slice/crio-07d4384811e1f2313a45aceab532176f3c3d05348e79e5d4955b534eb2e2868c WatchSource:0}: Error finding container 07d4384811e1f2313a45aceab532176f3c3d05348e79e5d4955b534eb2e2868c: Status 404 returned error can't find the container with id 07d4384811e1f2313a45aceab532176f3c3d05348e79e5d4955b534eb2e2868c Jan 24 00:22:24 crc kubenswrapper[4676]: W0124 
00:22:24.689580 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac7dce6b_3bd9_4ad9_9485_83d9384b8bad.slice/crio-9bb78a03e6699d79a2bf36213743a6fe1a49edf74f9acac38e1b0834e4817a68 WatchSource:0}: Error finding container 9bb78a03e6699d79a2bf36213743a6fe1a49edf74f9acac38e1b0834e4817a68: Status 404 returned error can't find the container with id 9bb78a03e6699d79a2bf36213743a6fe1a49edf74f9acac38e1b0834e4817a68 Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.786015 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l49lm" event={"ID":"a7327fe7-179a-4492-8823-94b1067c17d4","Type":"ContainerStarted","Data":"02e78984834a58ca6226d3263c50f11bea3fc65e0cf4f6d946dcccb6c1c331db"} Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.789836 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"77ec0563-46f1-45b0-892b-352d088f9517","Type":"ContainerStarted","Data":"07d4384811e1f2313a45aceab532176f3c3d05348e79e5d4955b534eb2e2868c"} Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.795971 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4kdvp" event={"ID":"c17fc5e6-983e-4678-b22c-c68686271163","Type":"ContainerStarted","Data":"531b26a23fe05a6b22c6e35ea5ff32eb5bb18c34495f3ce046a990e3e2b684b5"} Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.806850 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf988b4bd-ls7hp" event={"ID":"9d2451a0-4896-46e4-9b9e-e309ccdf02f2","Type":"ContainerStarted","Data":"e16c7aef4fd28d5aab8b64abddb94def56344a74657a3085b40c7035b723d59e"} Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.808351 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f876ddf46-fs7qv" 
event={"ID":"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad","Type":"ContainerStarted","Data":"9bb78a03e6699d79a2bf36213743a6fe1a49edf74f9acac38e1b0834e4817a68"} Jan 24 00:22:24 crc kubenswrapper[4676]: I0124 00:22:24.820799 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-4kdvp" podStartSLOduration=5.296070105 podStartE2EDuration="33.820783714s" podCreationTimestamp="2026-01-24 00:21:51 +0000 UTC" firstStartedPulling="2026-01-24 00:21:53.694398031 +0000 UTC m=+1097.724369032" lastFinishedPulling="2026-01-24 00:22:22.21911164 +0000 UTC m=+1126.249082641" observedRunningTime="2026-01-24 00:22:24.817631408 +0000 UTC m=+1128.847602409" watchObservedRunningTime="2026-01-24 00:22:24.820783714 +0000 UTC m=+1128.850754715" Jan 24 00:22:24 crc kubenswrapper[4676]: E0124 00:22:24.828838 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7klnv" podUID="d1163cc6-7ce1-4f2b-9d4a-f3e215177842" Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.208545 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bpbdz" Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.281532 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c23d24-6fd8-4660-a0f7-815fdf508f5b-combined-ca-bundle\") pod \"01c23d24-6fd8-4660-a0f7-815fdf508f5b\" (UID: \"01c23d24-6fd8-4660-a0f7-815fdf508f5b\") " Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.281699 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/01c23d24-6fd8-4660-a0f7-815fdf508f5b-config\") pod \"01c23d24-6fd8-4660-a0f7-815fdf508f5b\" (UID: \"01c23d24-6fd8-4660-a0f7-815fdf508f5b\") " Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.281721 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mc9c\" (UniqueName: \"kubernetes.io/projected/01c23d24-6fd8-4660-a0f7-815fdf508f5b-kube-api-access-4mc9c\") pod \"01c23d24-6fd8-4660-a0f7-815fdf508f5b\" (UID: \"01c23d24-6fd8-4660-a0f7-815fdf508f5b\") " Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.295126 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c23d24-6fd8-4660-a0f7-815fdf508f5b-kube-api-access-4mc9c" (OuterVolumeSpecName: "kube-api-access-4mc9c") pod "01c23d24-6fd8-4660-a0f7-815fdf508f5b" (UID: "01c23d24-6fd8-4660-a0f7-815fdf508f5b"). InnerVolumeSpecName "kube-api-access-4mc9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.296968 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.352359 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c23d24-6fd8-4660-a0f7-815fdf508f5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01c23d24-6fd8-4660-a0f7-815fdf508f5b" (UID: "01c23d24-6fd8-4660-a0f7-815fdf508f5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.369480 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c23d24-6fd8-4660-a0f7-815fdf508f5b-config" (OuterVolumeSpecName: "config") pod "01c23d24-6fd8-4660-a0f7-815fdf508f5b" (UID: "01c23d24-6fd8-4660-a0f7-815fdf508f5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.393324 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/01c23d24-6fd8-4660-a0f7-815fdf508f5b-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.393356 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mc9c\" (UniqueName: \"kubernetes.io/projected/01c23d24-6fd8-4660-a0f7-815fdf508f5b-kube-api-access-4mc9c\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.393400 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c23d24-6fd8-4660-a0f7-815fdf508f5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.906717 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf988b4bd-ls7hp" event={"ID":"9d2451a0-4896-46e4-9b9e-e309ccdf02f2","Type":"ContainerStarted","Data":"ca1454b684ac38a1446a4ef033530125143bc9a4b12de7af96f54377b563b8fb"} Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.923093 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f876ddf46-fs7qv" event={"ID":"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad","Type":"ContainerStarted","Data":"bcd4f4f533f53af3d30042718f9bbd0d4265fa269bd879cf2e257782f033723e"} Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.940601 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"643e6d41-6572-4f21-8651-7f577967bfe8","Type":"ContainerStarted","Data":"7eaab757b11ce1880975cf769167065cfd261eeb0ede6ebf82f2d90ebe083765"} Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.949821 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bpbdz" 
event={"ID":"01c23d24-6fd8-4660-a0f7-815fdf508f5b","Type":"ContainerDied","Data":"21e76890e9eced941fb1d30013143862a35607ef8c3138b99e6cd707709e85c8"} Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.949860 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21e76890e9eced941fb1d30013143862a35607ef8c3138b99e6cd707709e85c8" Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.949947 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bpbdz" Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.991888 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54d56910-d4b7-45b1-8699-5af7eaa29b96","Type":"ContainerStarted","Data":"f53a3c895796c44b94d61775eb726821f040873261a1a8fbdb79f578ca085ec5"} Jan 24 00:22:25 crc kubenswrapper[4676]: I0124 00:22:25.997947 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l49lm" event={"ID":"a7327fe7-179a-4492-8823-94b1067c17d4","Type":"ContainerStarted","Data":"47e15b8611d25a16d27aae65f29f85b602c09a46ed5cfeeea3914ac200a1d0a7"} Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.017082 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"77ec0563-46f1-45b0-892b-352d088f9517","Type":"ContainerStarted","Data":"7c7d07ff8323e9ccead03e4d66a066ad64d46892c577bd47375ca06cbd3b2418"} Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.061120 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-l49lm" podStartSLOduration=19.061098386 podStartE2EDuration="19.061098386s" podCreationTimestamp="2026-01-24 00:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:22:26.056924529 +0000 UTC m=+1130.086895590" watchObservedRunningTime="2026-01-24 
00:22:26.061098386 +0000 UTC m=+1130.091069397" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.129876 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-r22sn"] Jan 24 00:22:26 crc kubenswrapper[4676]: E0124 00:22:26.130223 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c7f906-f96b-4d55-b771-6d6425b32b85" containerName="init" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.130239 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c7f906-f96b-4d55-b771-6d6425b32b85" containerName="init" Jan 24 00:22:26 crc kubenswrapper[4676]: E0124 00:22:26.130249 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c7f906-f96b-4d55-b771-6d6425b32b85" containerName="dnsmasq-dns" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.130256 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c7f906-f96b-4d55-b771-6d6425b32b85" containerName="dnsmasq-dns" Jan 24 00:22:26 crc kubenswrapper[4676]: E0124 00:22:26.130264 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c23d24-6fd8-4660-a0f7-815fdf508f5b" containerName="neutron-db-sync" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.130270 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c23d24-6fd8-4660-a0f7-815fdf508f5b" containerName="neutron-db-sync" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.130456 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c7f906-f96b-4d55-b771-6d6425b32b85" containerName="dnsmasq-dns" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.130472 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c23d24-6fd8-4660-a0f7-815fdf508f5b" containerName="neutron-db-sync" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.131238 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.193929 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-r22sn"] Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.304287 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-756b7b5794-xhgs6"] Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.305537 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.311814 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.312064 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xf4w5" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.312267 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.312388 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.320239 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.320301 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-dns-svc\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " 
pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.320325 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.320368 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq96k\" (UniqueName: \"kubernetes.io/projected/da4decd9-74a7-4128-a34b-e30de6318a09-kube-api-access-cq96k\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.320415 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-config\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.320431 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.321075 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-756b7b5794-xhgs6"] Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.426195 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-config\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.426257 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.426309 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkwcg\" (UniqueName: \"kubernetes.io/projected/1d557c78-075a-44f7-a530-860ae3ec8ffd-kube-api-access-kkwcg\") pod \"neutron-756b7b5794-xhgs6\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.426364 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-config\") pod \"neutron-756b7b5794-xhgs6\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.426399 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-combined-ca-bundle\") pod \"neutron-756b7b5794-xhgs6\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.426468 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-httpd-config\") pod \"neutron-756b7b5794-xhgs6\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.426489 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-ovndb-tls-certs\") pod \"neutron-756b7b5794-xhgs6\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.426507 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.426554 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-dns-svc\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.426577 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.426627 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq96k\" (UniqueName: 
\"kubernetes.io/projected/da4decd9-74a7-4128-a34b-e30de6318a09-kube-api-access-cq96k\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.427078 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-config\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.452666 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq96k\" (UniqueName: \"kubernetes.io/projected/da4decd9-74a7-4128-a34b-e30de6318a09-kube-api-access-cq96k\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.467768 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.468528 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.468685 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-dns-svc\") pod 
\"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.469145 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-r22sn\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.527594 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkwcg\" (UniqueName: \"kubernetes.io/projected/1d557c78-075a-44f7-a530-860ae3ec8ffd-kube-api-access-kkwcg\") pod \"neutron-756b7b5794-xhgs6\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.527644 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-config\") pod \"neutron-756b7b5794-xhgs6\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.527661 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-combined-ca-bundle\") pod \"neutron-756b7b5794-xhgs6\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.527677 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-httpd-config\") pod \"neutron-756b7b5794-xhgs6\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " 
pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.527689 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-ovndb-tls-certs\") pod \"neutron-756b7b5794-xhgs6\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.532636 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-config\") pod \"neutron-756b7b5794-xhgs6\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.544323 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-httpd-config\") pod \"neutron-756b7b5794-xhgs6\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.544841 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-ovndb-tls-certs\") pod \"neutron-756b7b5794-xhgs6\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.545254 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-combined-ca-bundle\") pod \"neutron-756b7b5794-xhgs6\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.550329 4676 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kkwcg\" (UniqueName: \"kubernetes.io/projected/1d557c78-075a-44f7-a530-860ae3ec8ffd-kube-api-access-kkwcg\") pod \"neutron-756b7b5794-xhgs6\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.661888 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:26 crc kubenswrapper[4676]: I0124 00:22:26.758949 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:27 crc kubenswrapper[4676]: I0124 00:22:27.046677 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"77ec0563-46f1-45b0-892b-352d088f9517","Type":"ContainerStarted","Data":"1989fb85f0013ac25e4d8fb0c72cdef0a1a08075037f574a25a6701244b8dd50"} Jan 24 00:22:27 crc kubenswrapper[4676]: I0124 00:22:27.053245 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf988b4bd-ls7hp" event={"ID":"9d2451a0-4896-46e4-9b9e-e309ccdf02f2","Type":"ContainerStarted","Data":"9b6a2140f368edfea407714ce9a9afb7bd7057997de5cfab1861b7a16523894e"} Jan 24 00:22:27 crc kubenswrapper[4676]: I0124 00:22:27.067748 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 24 00:22:27 crc kubenswrapper[4676]: I0124 00:22:27.067804 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 24 00:22:27 crc kubenswrapper[4676]: I0124 00:22:27.110186 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=21.11017168 podStartE2EDuration="21.11017168s" podCreationTimestamp="2026-01-24 00:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:22:27.080124007 +0000 UTC m=+1131.110095008" watchObservedRunningTime="2026-01-24 00:22:27.11017168 +0000 UTC m=+1131.140142681" Jan 24 00:22:27 crc kubenswrapper[4676]: I0124 00:22:27.111899 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f876ddf46-fs7qv" event={"ID":"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad","Type":"ContainerStarted","Data":"c8c367e4ea3e593fa82e51f67b7f8371ac3844af82364461f69e4513b638a905"} Jan 24 00:22:27 crc kubenswrapper[4676]: I0124 00:22:27.113206 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-bf988b4bd-ls7hp" podStartSLOduration=27.480296154 podStartE2EDuration="28.113200942s" podCreationTimestamp="2026-01-24 00:21:59 +0000 UTC" firstStartedPulling="2026-01-24 00:22:24.706441117 +0000 UTC m=+1128.736412108" lastFinishedPulling="2026-01-24 00:22:25.339345895 +0000 UTC m=+1129.369316896" observedRunningTime="2026-01-24 00:22:27.110325275 +0000 UTC m=+1131.140296276" watchObservedRunningTime="2026-01-24 00:22:27.113200942 +0000 UTC m=+1131.143171943" Jan 24 00:22:27 crc kubenswrapper[4676]: I0124 00:22:27.138786 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-f876ddf46-fs7qv" podStartSLOduration=27.513597696 podStartE2EDuration="28.1387688s" podCreationTimestamp="2026-01-24 00:21:59 +0000 UTC" firstStartedPulling="2026-01-24 00:22:24.706322963 +0000 UTC m=+1128.736293964" lastFinishedPulling="2026-01-24 00:22:25.331494067 +0000 UTC m=+1129.361465068" observedRunningTime="2026-01-24 00:22:27.135415658 +0000 UTC m=+1131.165386659" watchObservedRunningTime="2026-01-24 00:22:27.1387688 +0000 UTC m=+1131.168739801" Jan 24 00:22:27 crc kubenswrapper[4676]: I0124 00:22:27.188841 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"643e6d41-6572-4f21-8651-7f577967bfe8","Type":"ContainerStarted","Data":"b6416b75b9719da0d296633782ac2fc04e1dd20b2e9d142f5025be8c5a5d754d"} Jan 24 00:22:27 crc kubenswrapper[4676]: I0124 00:22:27.209208 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 24 00:22:27 crc kubenswrapper[4676]: I0124 00:22:27.232086 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 24 00:22:27 crc kubenswrapper[4676]: I0124 00:22:27.447906 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-756b7b5794-xhgs6"] Jan 24 00:22:27 crc kubenswrapper[4676]: I0124 00:22:27.801055 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-r22sn"] Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.197146 4676 generic.go:334] "Generic (PLEG): container finished" podID="c17fc5e6-983e-4678-b22c-c68686271163" containerID="531b26a23fe05a6b22c6e35ea5ff32eb5bb18c34495f3ce046a990e3e2b684b5" exitCode=0 Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.197347 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4kdvp" event={"ID":"c17fc5e6-983e-4678-b22c-c68686271163","Type":"ContainerDied","Data":"531b26a23fe05a6b22c6e35ea5ff32eb5bb18c34495f3ce046a990e3e2b684b5"} Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.201913 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-r22sn" event={"ID":"da4decd9-74a7-4128-a34b-e30de6318a09","Type":"ContainerStarted","Data":"a839f941eaa0eb3fef79a831fc52f7ec100168032ef563b034304acff5b0a94c"} Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.215960 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"643e6d41-6572-4f21-8651-7f577967bfe8","Type":"ContainerStarted","Data":"e3769b8edac359e6c5b4b6e492f9b2bb7ee70731b0b4c2778ff42c7ddf474ff8"} Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.220190 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-756b7b5794-xhgs6" event={"ID":"1d557c78-075a-44f7-a530-860ae3ec8ffd","Type":"ContainerStarted","Data":"ac9099c2dcc1c109c19d71ba0aa8ba6b06382c4d9cd67585b0af07f9df929d7a"} Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.220221 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.220233 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-756b7b5794-xhgs6" event={"ID":"1d557c78-075a-44f7-a530-860ae3ec8ffd","Type":"ContainerStarted","Data":"90033ae7fbe939cb7401b9fbf3c1fcdc688cc9fa72836aaba5d889149f24959e"} Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.220246 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.220254 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-756b7b5794-xhgs6" event={"ID":"1d557c78-075a-44f7-a530-860ae3ec8ffd","Type":"ContainerStarted","Data":"2a37de0875eb36fe5f67aef1544f7cf8a8b0cd17d7552f5a5cd3065c861671b8"} Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.221040 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.260638 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=22.260618258 podStartE2EDuration="22.260618258s" podCreationTimestamp="2026-01-24 00:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:22:28.233759132 +0000 UTC m=+1132.263730133" watchObservedRunningTime="2026-01-24 00:22:28.260618258 +0000 UTC m=+1132.290589259" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.270788 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-756b7b5794-xhgs6" podStartSLOduration=2.270774938 podStartE2EDuration="2.270774938s" podCreationTimestamp="2026-01-24 00:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:22:28.264142166 +0000 UTC m=+1132.294113167" watchObservedRunningTime="2026-01-24 00:22:28.270774938 +0000 UTC m=+1132.300745939" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.621932 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b7b989687-7nkv4"] Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.625407 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.631393 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.632053 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.654424 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b7b989687-7nkv4"] Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.693867 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-public-tls-certs\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.693924 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-httpd-config\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.693982 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-config\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.694037 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-ovndb-tls-certs\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.694060 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-internal-tls-certs\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.694083 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-combined-ca-bundle\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.694266 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm9l4\" (UniqueName: \"kubernetes.io/projected/2e3ec968-d892-487a-930a-44c79123d54b-kube-api-access-nm9l4\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.796422 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-config\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.796504 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-ovndb-tls-certs\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.796528 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-internal-tls-certs\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.796554 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-combined-ca-bundle\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.796592 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm9l4\" (UniqueName: \"kubernetes.io/projected/2e3ec968-d892-487a-930a-44c79123d54b-kube-api-access-nm9l4\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.796648 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-public-tls-certs\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.796675 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-httpd-config\") pod 
\"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.801300 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-combined-ca-bundle\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.805055 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-public-tls-certs\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.806342 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-ovndb-tls-certs\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.806395 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-config\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.808301 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-internal-tls-certs\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc 
kubenswrapper[4676]: I0124 00:22:28.809159 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-httpd-config\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.812063 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm9l4\" (UniqueName: \"kubernetes.io/projected/2e3ec968-d892-487a-930a-44c79123d54b-kube-api-access-nm9l4\") pod \"neutron-b7b989687-7nkv4\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:28 crc kubenswrapper[4676]: I0124 00:22:28.960812 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.259909 4676 generic.go:334] "Generic (PLEG): container finished" podID="da4decd9-74a7-4128-a34b-e30de6318a09" containerID="8f47f3dbe46f5da8dd23954963ad8dc0cd2663f72dd53c26d9fe4db7d0341294" exitCode=0 Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.259972 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-r22sn" event={"ID":"da4decd9-74a7-4128-a34b-e30de6318a09","Type":"ContainerDied","Data":"8f47f3dbe46f5da8dd23954963ad8dc0cd2663f72dd53c26d9fe4db7d0341294"} Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.268405 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g5dnq" event={"ID":"04db8ba1-c0de-4985-8a48-ead625786472","Type":"ContainerStarted","Data":"9f879edfb9e19e2ba5d1f7acc73cb74d671c4cd334c91721768ab3acc3bba0e9"} Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.289519 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-g5dnq" podStartSLOduration=4.217343798 
podStartE2EDuration="38.289482669s" podCreationTimestamp="2026-01-24 00:21:51 +0000 UTC" firstStartedPulling="2026-01-24 00:21:53.693594727 +0000 UTC m=+1097.723565728" lastFinishedPulling="2026-01-24 00:22:27.765733608 +0000 UTC m=+1131.795704599" observedRunningTime="2026-01-24 00:22:29.284107376 +0000 UTC m=+1133.314078377" watchObservedRunningTime="2026-01-24 00:22:29.289482669 +0000 UTC m=+1133.319453670" Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.591044 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4kdvp" Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.709832 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-scripts\") pod \"c17fc5e6-983e-4678-b22c-c68686271163\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.710008 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-combined-ca-bundle\") pod \"c17fc5e6-983e-4678-b22c-c68686271163\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.710068 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c17fc5e6-983e-4678-b22c-c68686271163-logs\") pod \"c17fc5e6-983e-4678-b22c-c68686271163\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.710086 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-config-data\") pod \"c17fc5e6-983e-4678-b22c-c68686271163\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " Jan 24 00:22:29 crc 
kubenswrapper[4676]: I0124 00:22:29.710157 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv7wz\" (UniqueName: \"kubernetes.io/projected/c17fc5e6-983e-4678-b22c-c68686271163-kube-api-access-nv7wz\") pod \"c17fc5e6-983e-4678-b22c-c68686271163\" (UID: \"c17fc5e6-983e-4678-b22c-c68686271163\") " Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.710430 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c17fc5e6-983e-4678-b22c-c68686271163-logs" (OuterVolumeSpecName: "logs") pod "c17fc5e6-983e-4678-b22c-c68686271163" (UID: "c17fc5e6-983e-4678-b22c-c68686271163"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.711028 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c17fc5e6-983e-4678-b22c-c68686271163-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.715238 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-scripts" (OuterVolumeSpecName: "scripts") pod "c17fc5e6-983e-4678-b22c-c68686271163" (UID: "c17fc5e6-983e-4678-b22c-c68686271163"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.719203 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c17fc5e6-983e-4678-b22c-c68686271163-kube-api-access-nv7wz" (OuterVolumeSpecName: "kube-api-access-nv7wz") pod "c17fc5e6-983e-4678-b22c-c68686271163" (UID: "c17fc5e6-983e-4678-b22c-c68686271163"). InnerVolumeSpecName "kube-api-access-nv7wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.740677 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c17fc5e6-983e-4678-b22c-c68686271163" (UID: "c17fc5e6-983e-4678-b22c-c68686271163"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.752692 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-config-data" (OuterVolumeSpecName: "config-data") pod "c17fc5e6-983e-4678-b22c-c68686271163" (UID: "c17fc5e6-983e-4678-b22c-c68686271163"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.812732 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.812770 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.812783 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c17fc5e6-983e-4678-b22c-c68686271163-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.812792 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv7wz\" (UniqueName: \"kubernetes.io/projected/c17fc5e6-983e-4678-b22c-c68686271163-kube-api-access-nv7wz\") on node \"crc\" DevicePath \"\"" Jan 
24 00:22:29 crc kubenswrapper[4676]: I0124 00:22:29.873629 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b7b989687-7nkv4"] Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.172024 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.172274 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.332099 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7b989687-7nkv4" event={"ID":"2e3ec968-d892-487a-930a-44c79123d54b","Type":"ContainerStarted","Data":"59d763231e986383c1ba6301030139cf376841912045d87d7f5c53b19c30ff2a"} Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.332139 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.332155 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.335862 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4kdvp" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.337385 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4kdvp" event={"ID":"c17fc5e6-983e-4678-b22c-c68686271163","Type":"ContainerDied","Data":"cb82773a377e4565776a6be3bc254bf675dcbd0497f37c26a350eb27b7526931"} Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.337428 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb82773a377e4565776a6be3bc254bf675dcbd0497f37c26a350eb27b7526931" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.528050 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-765f6cdf58-5q9v9"] Jan 24 00:22:30 crc kubenswrapper[4676]: E0124 00:22:30.528466 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c17fc5e6-983e-4678-b22c-c68686271163" containerName="placement-db-sync" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.528485 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="c17fc5e6-983e-4678-b22c-c68686271163" containerName="placement-db-sync" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.528636 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="c17fc5e6-983e-4678-b22c-c68686271163" containerName="placement-db-sync" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.529593 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.535071 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.537117 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.537327 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.537488 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6qnnp" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.537578 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.560643 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-765f6cdf58-5q9v9"] Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.630276 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21e63383-223a-4247-8589-03ab5a33f980-internal-tls-certs\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.630654 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21e63383-223a-4247-8589-03ab5a33f980-public-tls-certs\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.630680 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e63383-223a-4247-8589-03ab5a33f980-combined-ca-bundle\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.630733 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21e63383-223a-4247-8589-03ab5a33f980-scripts\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.630769 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21e63383-223a-4247-8589-03ab5a33f980-logs\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.630821 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ftlq\" (UniqueName: \"kubernetes.io/projected/21e63383-223a-4247-8589-03ab5a33f980-kube-api-access-2ftlq\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.630845 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e63383-223a-4247-8589-03ab5a33f980-config-data\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.732124 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21e63383-223a-4247-8589-03ab5a33f980-logs\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.732208 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ftlq\" (UniqueName: \"kubernetes.io/projected/21e63383-223a-4247-8589-03ab5a33f980-kube-api-access-2ftlq\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.732235 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e63383-223a-4247-8589-03ab5a33f980-config-data\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.732271 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21e63383-223a-4247-8589-03ab5a33f980-internal-tls-certs\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.732299 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21e63383-223a-4247-8589-03ab5a33f980-public-tls-certs\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.732318 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/21e63383-223a-4247-8589-03ab5a33f980-combined-ca-bundle\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.732365 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21e63383-223a-4247-8589-03ab5a33f980-scripts\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.738907 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21e63383-223a-4247-8589-03ab5a33f980-scripts\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.739165 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e63383-223a-4247-8589-03ab5a33f980-config-data\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.739223 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21e63383-223a-4247-8589-03ab5a33f980-logs\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.740039 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21e63383-223a-4247-8589-03ab5a33f980-internal-tls-certs\") pod \"placement-765f6cdf58-5q9v9\" (UID: 
\"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.742191 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e63383-223a-4247-8589-03ab5a33f980-combined-ca-bundle\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.742814 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21e63383-223a-4247-8589-03ab5a33f980-public-tls-certs\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.764056 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ftlq\" (UniqueName: \"kubernetes.io/projected/21e63383-223a-4247-8589-03ab5a33f980-kube-api-access-2ftlq\") pod \"placement-765f6cdf58-5q9v9\" (UID: \"21e63383-223a-4247-8589-03ab5a33f980\") " pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:30 crc kubenswrapper[4676]: I0124 00:22:30.866487 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:31 crc kubenswrapper[4676]: I0124 00:22:31.374554 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7b989687-7nkv4" event={"ID":"2e3ec968-d892-487a-930a-44c79123d54b","Type":"ContainerStarted","Data":"b7aee4889e0ac214cfcb4e7b60ee0df9d729146b0da2baea1b44c7b338f2b7ae"} Jan 24 00:22:31 crc kubenswrapper[4676]: I0124 00:22:31.374852 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7b989687-7nkv4" event={"ID":"2e3ec968-d892-487a-930a-44c79123d54b","Type":"ContainerStarted","Data":"bb52e795a11686953fafadd660026fb29a73517b463940520ebb5925860e8724"} Jan 24 00:22:31 crc kubenswrapper[4676]: I0124 00:22:31.375232 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:22:31 crc kubenswrapper[4676]: I0124 00:22:31.402937 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b7b989687-7nkv4" podStartSLOduration=3.402922424 podStartE2EDuration="3.402922424s" podCreationTimestamp="2026-01-24 00:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:22:31.401112709 +0000 UTC m=+1135.431083710" watchObservedRunningTime="2026-01-24 00:22:31.402922424 +0000 UTC m=+1135.432893425" Jan 24 00:22:31 crc kubenswrapper[4676]: I0124 00:22:31.616208 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-765f6cdf58-5q9v9"] Jan 24 00:22:33 crc kubenswrapper[4676]: I0124 00:22:33.404748 4676 generic.go:334] "Generic (PLEG): container finished" podID="a7327fe7-179a-4492-8823-94b1067c17d4" containerID="47e15b8611d25a16d27aae65f29f85b602c09a46ed5cfeeea3914ac200a1d0a7" exitCode=0 Jan 24 00:22:33 crc kubenswrapper[4676]: I0124 00:22:33.406769 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-l49lm" event={"ID":"a7327fe7-179a-4492-8823-94b1067c17d4","Type":"ContainerDied","Data":"47e15b8611d25a16d27aae65f29f85b602c09a46ed5cfeeea3914ac200a1d0a7"} Jan 24 00:22:34 crc kubenswrapper[4676]: I0124 00:22:34.415408 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54d56910-d4b7-45b1-8699-5af7eaa29b96","Type":"ContainerStarted","Data":"dae61f7a8060ccdb486fd53e86c1fca33bbb71e0fb9f0d456ad2e0a5a8007cbd"} Jan 24 00:22:34 crc kubenswrapper[4676]: I0124 00:22:34.417457 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-r22sn" event={"ID":"da4decd9-74a7-4128-a34b-e30de6318a09","Type":"ContainerStarted","Data":"4ac4a65e17730cc898b7a1caa2cf6f59c848e5dbfb4544495d7d26d412afc515"} Jan 24 00:22:34 crc kubenswrapper[4676]: I0124 00:22:34.417626 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:34 crc kubenswrapper[4676]: I0124 00:22:34.423451 4676 generic.go:334] "Generic (PLEG): container finished" podID="04db8ba1-c0de-4985-8a48-ead625786472" containerID="9f879edfb9e19e2ba5d1f7acc73cb74d671c4cd334c91721768ab3acc3bba0e9" exitCode=0 Jan 24 00:22:34 crc kubenswrapper[4676]: I0124 00:22:34.423624 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g5dnq" event={"ID":"04db8ba1-c0de-4985-8a48-ead625786472","Type":"ContainerDied","Data":"9f879edfb9e19e2ba5d1f7acc73cb74d671c4cd334c91721768ab3acc3bba0e9"} Jan 24 00:22:34 crc kubenswrapper[4676]: I0124 00:22:34.435266 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-765f6cdf58-5q9v9" event={"ID":"21e63383-223a-4247-8589-03ab5a33f980","Type":"ContainerStarted","Data":"868be920f6388bdbc355ce503a74e95b8e95bf93569e1a5a274cfdb739ad2428"} Jan 24 00:22:34 crc kubenswrapper[4676]: I0124 00:22:34.435495 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:34 crc kubenswrapper[4676]: I0124 00:22:34.435559 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-765f6cdf58-5q9v9" event={"ID":"21e63383-223a-4247-8589-03ab5a33f980","Type":"ContainerStarted","Data":"8c36a476f94bdc3802c17e235d4cf6c0cf6f6b6b85d7555584c90cce1257c52c"} Jan 24 00:22:34 crc kubenswrapper[4676]: I0124 00:22:34.435627 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-765f6cdf58-5q9v9" event={"ID":"21e63383-223a-4247-8589-03ab5a33f980","Type":"ContainerStarted","Data":"76364b6f4d0e1a256cba5e253a99692a3f603a368a9595e44e963e8a0e1ff19c"} Jan 24 00:22:34 crc kubenswrapper[4676]: I0124 00:22:34.435698 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:22:34 crc kubenswrapper[4676]: I0124 00:22:34.452994 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-r22sn" podStartSLOduration=8.452979474 podStartE2EDuration="8.452979474s" podCreationTimestamp="2026-01-24 00:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:22:34.437919966 +0000 UTC m=+1138.467890967" watchObservedRunningTime="2026-01-24 00:22:34.452979474 +0000 UTC m=+1138.482950475" Jan 24 00:22:34 crc kubenswrapper[4676]: I0124 00:22:34.504279 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-765f6cdf58-5q9v9" podStartSLOduration=4.504260964 podStartE2EDuration="4.504260964s" podCreationTimestamp="2026-01-24 00:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:22:34.498039414 +0000 UTC m=+1138.528010415" watchObservedRunningTime="2026-01-24 00:22:34.504260964 +0000 UTC m=+1138.534231965" Jan 24 00:22:34 
crc kubenswrapper[4676]: I0124 00:22:34.678540 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 24 00:22:34 crc kubenswrapper[4676]: I0124 00:22:34.922868 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.028440 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-scripts\") pod \"a7327fe7-179a-4492-8823-94b1067c17d4\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.028560 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-credential-keys\") pod \"a7327fe7-179a-4492-8823-94b1067c17d4\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.028620 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-fernet-keys\") pod \"a7327fe7-179a-4492-8823-94b1067c17d4\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.028672 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-config-data\") pod \"a7327fe7-179a-4492-8823-94b1067c17d4\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.028691 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjz2c\" (UniqueName: 
\"kubernetes.io/projected/a7327fe7-179a-4492-8823-94b1067c17d4-kube-api-access-wjz2c\") pod \"a7327fe7-179a-4492-8823-94b1067c17d4\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.028714 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-combined-ca-bundle\") pod \"a7327fe7-179a-4492-8823-94b1067c17d4\" (UID: \"a7327fe7-179a-4492-8823-94b1067c17d4\") " Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.033211 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-scripts" (OuterVolumeSpecName: "scripts") pod "a7327fe7-179a-4492-8823-94b1067c17d4" (UID: "a7327fe7-179a-4492-8823-94b1067c17d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.034599 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a7327fe7-179a-4492-8823-94b1067c17d4" (UID: "a7327fe7-179a-4492-8823-94b1067c17d4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.035295 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a7327fe7-179a-4492-8823-94b1067c17d4" (UID: "a7327fe7-179a-4492-8823-94b1067c17d4"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.043497 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7327fe7-179a-4492-8823-94b1067c17d4-kube-api-access-wjz2c" (OuterVolumeSpecName: "kube-api-access-wjz2c") pod "a7327fe7-179a-4492-8823-94b1067c17d4" (UID: "a7327fe7-179a-4492-8823-94b1067c17d4"). InnerVolumeSpecName "kube-api-access-wjz2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.057541 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-config-data" (OuterVolumeSpecName: "config-data") pod "a7327fe7-179a-4492-8823-94b1067c17d4" (UID: "a7327fe7-179a-4492-8823-94b1067c17d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.057642 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7327fe7-179a-4492-8823-94b1067c17d4" (UID: "a7327fe7-179a-4492-8823-94b1067c17d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.131293 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjz2c\" (UniqueName: \"kubernetes.io/projected/a7327fe7-179a-4492-8823-94b1067c17d4-kube-api-access-wjz2c\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.131333 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.131342 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.131351 4676 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.131360 4676 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.131371 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7327fe7-179a-4492-8823-94b1067c17d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.457136 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l49lm" event={"ID":"a7327fe7-179a-4492-8823-94b1067c17d4","Type":"ContainerDied","Data":"02e78984834a58ca6226d3263c50f11bea3fc65e0cf4f6d946dcccb6c1c331db"} Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 
00:22:35.457411 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02e78984834a58ca6226d3263c50f11bea3fc65e0cf4f6d946dcccb6c1c331db" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.457847 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l49lm" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.516183 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-849b597d57-kw79c"] Jan 24 00:22:35 crc kubenswrapper[4676]: E0124 00:22:35.516559 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7327fe7-179a-4492-8823-94b1067c17d4" containerName="keystone-bootstrap" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.516578 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7327fe7-179a-4492-8823-94b1067c17d4" containerName="keystone-bootstrap" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.516788 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7327fe7-179a-4492-8823-94b1067c17d4" containerName="keystone-bootstrap" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.517281 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.521546 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.521812 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.521980 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fcgg9" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.522094 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.524452 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.524730 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.534006 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-849b597d57-kw79c"] Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.639708 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-public-tls-certs\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.639781 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-config-data\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " 
pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.639801 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd9gh\" (UniqueName: \"kubernetes.io/projected/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-kube-api-access-rd9gh\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.639838 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-fernet-keys\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.639903 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-internal-tls-certs\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.639926 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-combined-ca-bundle\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.643779 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-credential-keys\") pod \"keystone-849b597d57-kw79c\" (UID: 
\"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.643948 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-scripts\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.758862 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-scripts\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.758918 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-public-tls-certs\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.758949 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-config-data\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.758967 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd9gh\" (UniqueName: \"kubernetes.io/projected/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-kube-api-access-rd9gh\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 
00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.759000 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-fernet-keys\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.764919 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-internal-tls-certs\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.770111 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-combined-ca-bundle\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.770170 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-credential-keys\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.775773 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-scripts\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.776084 4676 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-internal-tls-certs\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.776207 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-credential-keys\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.776861 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-fernet-keys\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.777985 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-public-tls-certs\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.779992 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-config-data\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.780948 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-combined-ca-bundle\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.792126 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd9gh\" (UniqueName: \"kubernetes.io/projected/29bf9e4b-4fb3-41f4-9280-f7ea2e61a844-kube-api-access-rd9gh\") pod \"keystone-849b597d57-kw79c\" (UID: \"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844\") " pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.834822 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:35 crc kubenswrapper[4676]: I0124 00:22:35.959162 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g5dnq" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.004609 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qvjc\" (UniqueName: \"kubernetes.io/projected/04db8ba1-c0de-4985-8a48-ead625786472-kube-api-access-7qvjc\") pod \"04db8ba1-c0de-4985-8a48-ead625786472\" (UID: \"04db8ba1-c0de-4985-8a48-ead625786472\") " Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.004718 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04db8ba1-c0de-4985-8a48-ead625786472-db-sync-config-data\") pod \"04db8ba1-c0de-4985-8a48-ead625786472\" (UID: \"04db8ba1-c0de-4985-8a48-ead625786472\") " Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.006477 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04db8ba1-c0de-4985-8a48-ead625786472-combined-ca-bundle\") pod \"04db8ba1-c0de-4985-8a48-ead625786472\" 
(UID: \"04db8ba1-c0de-4985-8a48-ead625786472\") " Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.011802 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04db8ba1-c0de-4985-8a48-ead625786472-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "04db8ba1-c0de-4985-8a48-ead625786472" (UID: "04db8ba1-c0de-4985-8a48-ead625786472"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.059183 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04db8ba1-c0de-4985-8a48-ead625786472-kube-api-access-7qvjc" (OuterVolumeSpecName: "kube-api-access-7qvjc") pod "04db8ba1-c0de-4985-8a48-ead625786472" (UID: "04db8ba1-c0de-4985-8a48-ead625786472"). InnerVolumeSpecName "kube-api-access-7qvjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.078163 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04db8ba1-c0de-4985-8a48-ead625786472-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04db8ba1-c0de-4985-8a48-ead625786472" (UID: "04db8ba1-c0de-4985-8a48-ead625786472"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.110301 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qvjc\" (UniqueName: \"kubernetes.io/projected/04db8ba1-c0de-4985-8a48-ead625786472-kube-api-access-7qvjc\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.110336 4676 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04db8ba1-c0de-4985-8a48-ead625786472-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.110347 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04db8ba1-c0de-4985-8a48-ead625786472-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.382715 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-849b597d57-kw79c"] Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.504763 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g5dnq" event={"ID":"04db8ba1-c0de-4985-8a48-ead625786472","Type":"ContainerDied","Data":"534f710da5fb3c7907cdb2b5714b0d256a8306b5476b74f728dc51abaa10f9c0"} Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.504799 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="534f710da5fb3c7907cdb2b5714b0d256a8306b5476b74f728dc51abaa10f9c0" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.504858 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-g5dnq" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.522447 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-849b597d57-kw79c" event={"ID":"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844","Type":"ContainerStarted","Data":"9d7106c5a67270c418b542070c878ce61db9ee79ef8f26b080376fbfa65270da"} Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.737362 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-68bd7fb46c-sflbz"] Jan 24 00:22:36 crc kubenswrapper[4676]: E0124 00:22:36.737855 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04db8ba1-c0de-4985-8a48-ead625786472" containerName="barbican-db-sync" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.737870 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="04db8ba1-c0de-4985-8a48-ead625786472" containerName="barbican-db-sync" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.738048 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="04db8ba1-c0de-4985-8a48-ead625786472" containerName="barbican-db-sync" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.738980 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.742412 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-96w42" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.742616 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.742836 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.766594 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68bd7fb46c-sflbz"] Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.781764 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-79ccd69c74-nj8k6"] Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.783163 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.800904 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.813440 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-79ccd69c74-nj8k6"] Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.823227 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a05052ce-062e-423c-80cf-78349e42718f-config-data-custom\") pod \"barbican-keystone-listener-79ccd69c74-nj8k6\" (UID: \"a05052ce-062e-423c-80cf-78349e42718f\") " pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.823269 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/009f35e0-3a98-453b-b92e-8db9e5c92798-config-data-custom\") pod \"barbican-worker-68bd7fb46c-sflbz\" (UID: \"009f35e0-3a98-453b-b92e-8db9e5c92798\") " pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.823305 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a05052ce-062e-423c-80cf-78349e42718f-logs\") pod \"barbican-keystone-listener-79ccd69c74-nj8k6\" (UID: \"a05052ce-062e-423c-80cf-78349e42718f\") " pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.823336 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxbwp\" (UniqueName: 
\"kubernetes.io/projected/a05052ce-062e-423c-80cf-78349e42718f-kube-api-access-mxbwp\") pod \"barbican-keystone-listener-79ccd69c74-nj8k6\" (UID: \"a05052ce-062e-423c-80cf-78349e42718f\") " pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.823354 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p982\" (UniqueName: \"kubernetes.io/projected/009f35e0-3a98-453b-b92e-8db9e5c92798-kube-api-access-2p982\") pod \"barbican-worker-68bd7fb46c-sflbz\" (UID: \"009f35e0-3a98-453b-b92e-8db9e5c92798\") " pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.823395 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05052ce-062e-423c-80cf-78349e42718f-combined-ca-bundle\") pod \"barbican-keystone-listener-79ccd69c74-nj8k6\" (UID: \"a05052ce-062e-423c-80cf-78349e42718f\") " pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.823415 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009f35e0-3a98-453b-b92e-8db9e5c92798-combined-ca-bundle\") pod \"barbican-worker-68bd7fb46c-sflbz\" (UID: \"009f35e0-3a98-453b-b92e-8db9e5c92798\") " pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.823444 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/009f35e0-3a98-453b-b92e-8db9e5c92798-config-data\") pod \"barbican-worker-68bd7fb46c-sflbz\" (UID: \"009f35e0-3a98-453b-b92e-8db9e5c92798\") " pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 
00:22:36.823461 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05052ce-062e-423c-80cf-78349e42718f-config-data\") pod \"barbican-keystone-listener-79ccd69c74-nj8k6\" (UID: \"a05052ce-062e-423c-80cf-78349e42718f\") " pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.823508 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/009f35e0-3a98-453b-b92e-8db9e5c92798-logs\") pod \"barbican-worker-68bd7fb46c-sflbz\" (UID: \"009f35e0-3a98-453b-b92e-8db9e5c92798\") " pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.899493 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-r22sn"] Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.900062 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-r22sn" podUID="da4decd9-74a7-4128-a34b-e30de6318a09" containerName="dnsmasq-dns" containerID="cri-o://4ac4a65e17730cc898b7a1caa2cf6f59c848e5dbfb4544495d7d26d412afc515" gracePeriod=10 Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.927298 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/009f35e0-3a98-453b-b92e-8db9e5c92798-logs\") pod \"barbican-worker-68bd7fb46c-sflbz\" (UID: \"009f35e0-3a98-453b-b92e-8db9e5c92798\") " pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.927352 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a05052ce-062e-423c-80cf-78349e42718f-config-data-custom\") pod \"barbican-keystone-listener-79ccd69c74-nj8k6\" (UID: 
\"a05052ce-062e-423c-80cf-78349e42718f\") " pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.927390 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/009f35e0-3a98-453b-b92e-8db9e5c92798-config-data-custom\") pod \"barbican-worker-68bd7fb46c-sflbz\" (UID: \"009f35e0-3a98-453b-b92e-8db9e5c92798\") " pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.927418 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a05052ce-062e-423c-80cf-78349e42718f-logs\") pod \"barbican-keystone-listener-79ccd69c74-nj8k6\" (UID: \"a05052ce-062e-423c-80cf-78349e42718f\") " pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.927450 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxbwp\" (UniqueName: \"kubernetes.io/projected/a05052ce-062e-423c-80cf-78349e42718f-kube-api-access-mxbwp\") pod \"barbican-keystone-listener-79ccd69c74-nj8k6\" (UID: \"a05052ce-062e-423c-80cf-78349e42718f\") " pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.931041 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p982\" (UniqueName: \"kubernetes.io/projected/009f35e0-3a98-453b-b92e-8db9e5c92798-kube-api-access-2p982\") pod \"barbican-worker-68bd7fb46c-sflbz\" (UID: \"009f35e0-3a98-453b-b92e-8db9e5c92798\") " pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.931099 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a05052ce-062e-423c-80cf-78349e42718f-combined-ca-bundle\") pod \"barbican-keystone-listener-79ccd69c74-nj8k6\" (UID: \"a05052ce-062e-423c-80cf-78349e42718f\") " pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.931123 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009f35e0-3a98-453b-b92e-8db9e5c92798-combined-ca-bundle\") pod \"barbican-worker-68bd7fb46c-sflbz\" (UID: \"009f35e0-3a98-453b-b92e-8db9e5c92798\") " pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.931172 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/009f35e0-3a98-453b-b92e-8db9e5c92798-config-data\") pod \"barbican-worker-68bd7fb46c-sflbz\" (UID: \"009f35e0-3a98-453b-b92e-8db9e5c92798\") " pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.931192 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05052ce-062e-423c-80cf-78349e42718f-config-data\") pod \"barbican-keystone-listener-79ccd69c74-nj8k6\" (UID: \"a05052ce-062e-423c-80cf-78349e42718f\") " pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.930751 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a05052ce-062e-423c-80cf-78349e42718f-logs\") pod \"barbican-keystone-listener-79ccd69c74-nj8k6\" (UID: \"a05052ce-062e-423c-80cf-78349e42718f\") " pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.928520 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/009f35e0-3a98-453b-b92e-8db9e5c92798-logs\") pod \"barbican-worker-68bd7fb46c-sflbz\" (UID: \"009f35e0-3a98-453b-b92e-8db9e5c92798\") " pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.933256 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/009f35e0-3a98-453b-b92e-8db9e5c92798-config-data-custom\") pod \"barbican-worker-68bd7fb46c-sflbz\" (UID: \"009f35e0-3a98-453b-b92e-8db9e5c92798\") " pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.939097 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05052ce-062e-423c-80cf-78349e42718f-combined-ca-bundle\") pod \"barbican-keystone-listener-79ccd69c74-nj8k6\" (UID: \"a05052ce-062e-423c-80cf-78349e42718f\") " pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.943306 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05052ce-062e-423c-80cf-78349e42718f-config-data\") pod \"barbican-keystone-listener-79ccd69c74-nj8k6\" (UID: \"a05052ce-062e-423c-80cf-78349e42718f\") " pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.944870 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009f35e0-3a98-453b-b92e-8db9e5c92798-combined-ca-bundle\") pod \"barbican-worker-68bd7fb46c-sflbz\" (UID: \"009f35e0-3a98-453b-b92e-8db9e5c92798\") " pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.950918 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xg9nt"] Jan 24 00:22:36 crc 
kubenswrapper[4676]: I0124 00:22:36.952321 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a05052ce-062e-423c-80cf-78349e42718f-config-data-custom\") pod \"barbican-keystone-listener-79ccd69c74-nj8k6\" (UID: \"a05052ce-062e-423c-80cf-78349e42718f\") " pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.952479 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.975425 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xg9nt"] Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.977288 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/009f35e0-3a98-453b-b92e-8db9e5c92798-config-data\") pod \"barbican-worker-68bd7fb46c-sflbz\" (UID: \"009f35e0-3a98-453b-b92e-8db9e5c92798\") " pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:36 crc kubenswrapper[4676]: I0124 00:22:36.981705 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p982\" (UniqueName: \"kubernetes.io/projected/009f35e0-3a98-453b-b92e-8db9e5c92798-kube-api-access-2p982\") pod \"barbican-worker-68bd7fb46c-sflbz\" (UID: \"009f35e0-3a98-453b-b92e-8db9e5c92798\") " pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.001414 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxbwp\" (UniqueName: \"kubernetes.io/projected/a05052ce-062e-423c-80cf-78349e42718f-kube-api-access-mxbwp\") pod \"barbican-keystone-listener-79ccd69c74-nj8k6\" (UID: \"a05052ce-062e-423c-80cf-78349e42718f\") " pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 
00:22:37.033686 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.033804 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.033881 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-config\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.033927 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j5s9\" (UniqueName: \"kubernetes.io/projected/e1f791cc-7690-4f80-9007-a14fcca8a632-kube-api-access-9j5s9\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.036713 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc 
kubenswrapper[4676]: I0124 00:22:37.036771 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.085461 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.087529 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.087579 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.087589 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.158131 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-68bd7fb46c-sflbz" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.164951 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.165019 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.165061 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.165146 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.166470 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-config\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" 
Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.166484 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.166561 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j5s9\" (UniqueName: \"kubernetes.io/projected/e1f791cc-7690-4f80-9007-a14fcca8a632-kube-api-access-9j5s9\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.183248 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.184610 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.185507 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-config\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.185671 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-ovsdbserver-sb\") pod 
\"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.200274 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.201213 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5d7c68bc86-pdds6"] Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.211631 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.214849 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.225949 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.227083 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j5s9\" (UniqueName: \"kubernetes.io/projected/e1f791cc-7690-4f80-9007-a14fcca8a632-kube-api-access-9j5s9\") pod \"dnsmasq-dns-848cf88cfc-xg9nt\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.249634 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.251704 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d7c68bc86-pdds6"] Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 
00:22:37.395643 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-config-data-custom\") pod \"barbican-api-5d7c68bc86-pdds6\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.395741 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-config-data\") pod \"barbican-api-5d7c68bc86-pdds6\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.395769 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-logs\") pod \"barbican-api-5d7c68bc86-pdds6\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.395784 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfjfx\" (UniqueName: \"kubernetes.io/projected/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-kube-api-access-bfjfx\") pod \"barbican-api-5d7c68bc86-pdds6\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.395798 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-combined-ca-bundle\") pod \"barbican-api-5d7c68bc86-pdds6\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " pod="openstack/barbican-api-5d7c68bc86-pdds6" 
Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.487657 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.497878 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-config-data-custom\") pod \"barbican-api-5d7c68bc86-pdds6\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.497961 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-config-data\") pod \"barbican-api-5d7c68bc86-pdds6\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.497988 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-logs\") pod \"barbican-api-5d7c68bc86-pdds6\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.498003 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-combined-ca-bundle\") pod \"barbican-api-5d7c68bc86-pdds6\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.498017 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfjfx\" (UniqueName: \"kubernetes.io/projected/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-kube-api-access-bfjfx\") 
pod \"barbican-api-5d7c68bc86-pdds6\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.499024 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-logs\") pod \"barbican-api-5d7c68bc86-pdds6\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.507539 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-config-data-custom\") pod \"barbican-api-5d7c68bc86-pdds6\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.507570 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-config-data\") pod \"barbican-api-5d7c68bc86-pdds6\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.509276 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-combined-ca-bundle\") pod \"barbican-api-5d7c68bc86-pdds6\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.521769 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfjfx\" (UniqueName: \"kubernetes.io/projected/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-kube-api-access-bfjfx\") pod \"barbican-api-5d7c68bc86-pdds6\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " 
pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.542413 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.605901 4676 generic.go:334] "Generic (PLEG): container finished" podID="da4decd9-74a7-4128-a34b-e30de6318a09" containerID="4ac4a65e17730cc898b7a1caa2cf6f59c848e5dbfb4544495d7d26d412afc515" exitCode=0 Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.605963 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-r22sn" event={"ID":"da4decd9-74a7-4128-a34b-e30de6318a09","Type":"ContainerDied","Data":"4ac4a65e17730cc898b7a1caa2cf6f59c848e5dbfb4544495d7d26d412afc515"} Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.649629 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-849b597d57-kw79c" event={"ID":"29bf9e4b-4fb3-41f4-9280-f7ea2e61a844","Type":"ContainerStarted","Data":"85f4314bb55ddae81a510b55326d40b4a0b3c015ae45e8ec0fc346374d088d81"} Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.649872 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.672166 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-849b597d57-kw79c" podStartSLOduration=2.672149998 podStartE2EDuration="2.672149998s" podCreationTimestamp="2026-01-24 00:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:22:37.67026375 +0000 UTC m=+1141.700234751" watchObservedRunningTime="2026-01-24 00:22:37.672149998 +0000 UTC m=+1141.702120999" Jan 24 00:22:37 crc kubenswrapper[4676]: I0124 00:22:37.914335 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.021620 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-dns-swift-storage-0\") pod \"da4decd9-74a7-4128-a34b-e30de6318a09\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.024425 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-ovsdbserver-nb\") pod \"da4decd9-74a7-4128-a34b-e30de6318a09\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.024549 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-ovsdbserver-sb\") pod \"da4decd9-74a7-4128-a34b-e30de6318a09\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.024569 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-dns-svc\") pod \"da4decd9-74a7-4128-a34b-e30de6318a09\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.024612 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq96k\" (UniqueName: \"kubernetes.io/projected/da4decd9-74a7-4128-a34b-e30de6318a09-kube-api-access-cq96k\") pod \"da4decd9-74a7-4128-a34b-e30de6318a09\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.024639 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-config\") pod \"da4decd9-74a7-4128-a34b-e30de6318a09\" (UID: \"da4decd9-74a7-4128-a34b-e30de6318a09\") " Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.079163 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da4decd9-74a7-4128-a34b-e30de6318a09-kube-api-access-cq96k" (OuterVolumeSpecName: "kube-api-access-cq96k") pod "da4decd9-74a7-4128-a34b-e30de6318a09" (UID: "da4decd9-74a7-4128-a34b-e30de6318a09"). InnerVolumeSpecName "kube-api-access-cq96k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.126402 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq96k\" (UniqueName: \"kubernetes.io/projected/da4decd9-74a7-4128-a34b-e30de6318a09-kube-api-access-cq96k\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.127432 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68bd7fb46c-sflbz"] Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.276128 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da4decd9-74a7-4128-a34b-e30de6318a09" (UID: "da4decd9-74a7-4128-a34b-e30de6318a09"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.289893 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da4decd9-74a7-4128-a34b-e30de6318a09" (UID: "da4decd9-74a7-4128-a34b-e30de6318a09"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.308798 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.330247 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da4decd9-74a7-4128-a34b-e30de6318a09" (UID: "da4decd9-74a7-4128-a34b-e30de6318a09"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.339622 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.339661 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.339670 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.344883 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "da4decd9-74a7-4128-a34b-e30de6318a09" (UID: "da4decd9-74a7-4128-a34b-e30de6318a09"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.370167 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-config" (OuterVolumeSpecName: "config") pod "da4decd9-74a7-4128-a34b-e30de6318a09" (UID: "da4decd9-74a7-4128-a34b-e30de6318a09"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.432413 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-79ccd69c74-nj8k6"] Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.440944 4676 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.440969 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4decd9-74a7-4128-a34b-e30de6318a09-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.614434 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xg9nt"] Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.623954 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d7c68bc86-pdds6"] Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.671398 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68bd7fb46c-sflbz" event={"ID":"009f35e0-3a98-453b-b92e-8db9e5c92798","Type":"ContainerStarted","Data":"68e1769fe551fc67b384bce7f563465fd1d599b8f8bb7ecda58660cc9b0ef80f"} Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.686718 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d7c68bc86-pdds6" 
event={"ID":"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4","Type":"ContainerStarted","Data":"779fb9c258d57c3a4a4641f3663256b4984fdbbbea094e97da3fbf8189a0afcc"} Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.691909 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" event={"ID":"a05052ce-062e-423c-80cf-78349e42718f","Type":"ContainerStarted","Data":"09c7884475f21bcc53519656b4e2b5987f770f2355ebbed13f07205caaa7988b"} Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.702709 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-r22sn" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.705503 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-r22sn" event={"ID":"da4decd9-74a7-4128-a34b-e30de6318a09","Type":"ContainerDied","Data":"a839f941eaa0eb3fef79a831fc52f7ec100168032ef563b034304acff5b0a94c"} Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.705565 4676 scope.go:117] "RemoveContainer" containerID="4ac4a65e17730cc898b7a1caa2cf6f59c848e5dbfb4544495d7d26d412afc515" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.742908 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" event={"ID":"e1f791cc-7690-4f80-9007-a14fcca8a632","Type":"ContainerStarted","Data":"2612852ab10c2bc7fb166122d80d7317364b784205eafc69c7981f567d09ca8e"} Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.853064 4676 scope.go:117] "RemoveContainer" containerID="8f47f3dbe46f5da8dd23954963ad8dc0cd2663f72dd53c26d9fe4db7d0341294" Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.898450 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-r22sn"] Jan 24 00:22:38 crc kubenswrapper[4676]: I0124 00:22:38.917429 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-r22sn"] Jan 24 
00:22:39 crc kubenswrapper[4676]: I0124 00:22:39.380119 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:22:39 crc kubenswrapper[4676]: I0124 00:22:39.380343 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:22:39 crc kubenswrapper[4676]: I0124 00:22:39.819754 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d7c68bc86-pdds6" event={"ID":"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4","Type":"ContainerStarted","Data":"dee85e5f9b90c28514a6fe5264e7105a66d2041ede87770d395b2f20c4c19a92"} Jan 24 00:22:39 crc kubenswrapper[4676]: I0124 00:22:39.820026 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d7c68bc86-pdds6" event={"ID":"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4","Type":"ContainerStarted","Data":"f6075d82d7703b6db2b90cc384faadb0429bfb79747fbdd96d4361dda203ec6f"} Jan 24 00:22:39 crc kubenswrapper[4676]: I0124 00:22:39.821076 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:39 crc kubenswrapper[4676]: I0124 00:22:39.821184 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:39 crc kubenswrapper[4676]: I0124 00:22:39.826194 4676 generic.go:334] "Generic (PLEG): container finished" podID="e1f791cc-7690-4f80-9007-a14fcca8a632" containerID="cfab6422e88eeccb3646e027bd8862c4d0fb49c299978a170241c30c578c119a" exitCode=0 Jan 24 00:22:39 
crc kubenswrapper[4676]: I0124 00:22:39.826265 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" event={"ID":"e1f791cc-7690-4f80-9007-a14fcca8a632","Type":"ContainerDied","Data":"cfab6422e88eeccb3646e027bd8862c4d0fb49c299978a170241c30c578c119a"} Jan 24 00:22:39 crc kubenswrapper[4676]: I0124 00:22:39.911343 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5d7c68bc86-pdds6" podStartSLOduration=2.911324877 podStartE2EDuration="2.911324877s" podCreationTimestamp="2026-01-24 00:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:22:39.883756469 +0000 UTC m=+1143.913727470" watchObservedRunningTime="2026-01-24 00:22:39.911324877 +0000 UTC m=+1143.941295878" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.170804 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-54b7b855f4-s49zw"] Jan 24 00:22:40 crc kubenswrapper[4676]: E0124 00:22:40.171245 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4decd9-74a7-4128-a34b-e30de6318a09" containerName="init" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.171266 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4decd9-74a7-4128-a34b-e30de6318a09" containerName="init" Jan 24 00:22:40 crc kubenswrapper[4676]: E0124 00:22:40.171286 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4decd9-74a7-4128-a34b-e30de6318a09" containerName="dnsmasq-dns" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.171294 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4decd9-74a7-4128-a34b-e30de6318a09" containerName="dnsmasq-dns" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.171539 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="da4decd9-74a7-4128-a34b-e30de6318a09" containerName="dnsmasq-dns" Jan 24 
00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.172446 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.173340 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-bf988b4bd-ls7hp" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.176074 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-54b7b855f4-s49zw"] Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.181856 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.182090 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.281333 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da4decd9-74a7-4128-a34b-e30de6318a09" path="/var/lib/kubelet/pods/da4decd9-74a7-4128-a34b-e30de6318a09/volumes" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.310451 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c015646b-9361-4b5c-b465-a66d7fc5cc53-combined-ca-bundle\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.310510 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c015646b-9361-4b5c-b465-a66d7fc5cc53-public-tls-certs\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.310543 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5vkx\" (UniqueName: \"kubernetes.io/projected/c015646b-9361-4b5c-b465-a66d7fc5cc53-kube-api-access-n5vkx\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.310571 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c015646b-9361-4b5c-b465-a66d7fc5cc53-config-data-custom\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.310595 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c015646b-9361-4b5c-b465-a66d7fc5cc53-logs\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.310622 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c015646b-9361-4b5c-b465-a66d7fc5cc53-config-data\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.310654 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c015646b-9361-4b5c-b465-a66d7fc5cc53-internal-tls-certs\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.324428 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-f876ddf46-fs7qv" podUID="ac7dce6b-3bd9-4ad9-9485-83d9384b8bad" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.412110 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c015646b-9361-4b5c-b465-a66d7fc5cc53-logs\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.412176 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c015646b-9361-4b5c-b465-a66d7fc5cc53-config-data\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.412216 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c015646b-9361-4b5c-b465-a66d7fc5cc53-internal-tls-certs\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.412275 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c015646b-9361-4b5c-b465-a66d7fc5cc53-combined-ca-bundle\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.412592 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c015646b-9361-4b5c-b465-a66d7fc5cc53-logs\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.413136 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c015646b-9361-4b5c-b465-a66d7fc5cc53-public-tls-certs\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.413174 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5vkx\" (UniqueName: \"kubernetes.io/projected/c015646b-9361-4b5c-b465-a66d7fc5cc53-kube-api-access-n5vkx\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.413201 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c015646b-9361-4b5c-b465-a66d7fc5cc53-config-data-custom\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.418193 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c015646b-9361-4b5c-b465-a66d7fc5cc53-combined-ca-bundle\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.418487 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c015646b-9361-4b5c-b465-a66d7fc5cc53-config-data\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.421910 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c015646b-9361-4b5c-b465-a66d7fc5cc53-internal-tls-certs\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.422337 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c015646b-9361-4b5c-b465-a66d7fc5cc53-config-data-custom\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.425494 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c015646b-9361-4b5c-b465-a66d7fc5cc53-public-tls-certs\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.436758 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5vkx\" (UniqueName: 
\"kubernetes.io/projected/c015646b-9361-4b5c-b465-a66d7fc5cc53-kube-api-access-n5vkx\") pod \"barbican-api-54b7b855f4-s49zw\" (UID: \"c015646b-9361-4b5c-b465-a66d7fc5cc53\") " pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.494114 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.896187 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" event={"ID":"e1f791cc-7690-4f80-9007-a14fcca8a632","Type":"ContainerStarted","Data":"e2b72fc7b5df059c0e36f49a0b10a80c4bc4c2798a40f7961418e5eff05aa92b"} Jan 24 00:22:40 crc kubenswrapper[4676]: I0124 00:22:40.925042 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" podStartSLOduration=4.925024486 podStartE2EDuration="4.925024486s" podCreationTimestamp="2026-01-24 00:22:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:22:40.919341363 +0000 UTC m=+1144.949312364" watchObservedRunningTime="2026-01-24 00:22:40.925024486 +0000 UTC m=+1144.954995477" Jan 24 00:22:41 crc kubenswrapper[4676]: I0124 00:22:41.078247 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-54b7b855f4-s49zw"] Jan 24 00:22:41 crc kubenswrapper[4676]: I0124 00:22:41.907585 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7klnv" event={"ID":"d1163cc6-7ce1-4f2b-9d4a-f3e215177842","Type":"ContainerStarted","Data":"e4299d6b00616548489e178e72141c6cc9961df799e192961e6eb82793cfee65"} Jan 24 00:22:41 crc kubenswrapper[4676]: I0124 00:22:41.908895 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:41 crc kubenswrapper[4676]: I0124 
00:22:41.927056 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7klnv" podStartSLOduration=5.797360287 podStartE2EDuration="51.92703827s" podCreationTimestamp="2026-01-24 00:21:50 +0000 UTC" firstStartedPulling="2026-01-24 00:21:53.684897811 +0000 UTC m=+1097.714868812" lastFinishedPulling="2026-01-24 00:22:39.814575804 +0000 UTC m=+1143.844546795" observedRunningTime="2026-01-24 00:22:41.921494611 +0000 UTC m=+1145.951465612" watchObservedRunningTime="2026-01-24 00:22:41.92703827 +0000 UTC m=+1145.957009271" Jan 24 00:22:41 crc kubenswrapper[4676]: I0124 00:22:41.998851 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 24 00:22:41 crc kubenswrapper[4676]: I0124 00:22:41.998940 4676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:22:42 crc kubenswrapper[4676]: I0124 00:22:42.659673 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 24 00:22:42 crc kubenswrapper[4676]: I0124 00:22:42.926801 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54b7b855f4-s49zw" event={"ID":"c015646b-9361-4b5c-b465-a66d7fc5cc53","Type":"ContainerStarted","Data":"e496978521b70573fd52b3add57183ce8b6b6e0787cf2107446d0ef525aff3d2"} Jan 24 00:22:42 crc kubenswrapper[4676]: I0124 00:22:42.927871 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54b7b855f4-s49zw" event={"ID":"c015646b-9361-4b5c-b465-a66d7fc5cc53","Type":"ContainerStarted","Data":"d0be057cf18c520ed2842a7fe9c36dc7e6de7a1d0254d9e200d5899fbc1c27d5"} Jan 24 00:22:42 crc kubenswrapper[4676]: I0124 00:22:42.929738 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68bd7fb46c-sflbz" 
event={"ID":"009f35e0-3a98-453b-b92e-8db9e5c92798","Type":"ContainerStarted","Data":"269c96192fce502f2b814bdc98b7995edf5b48d9aa750af264d7848e970f68c6"} Jan 24 00:22:42 crc kubenswrapper[4676]: I0124 00:22:42.956279 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" event={"ID":"a05052ce-062e-423c-80cf-78349e42718f","Type":"ContainerStarted","Data":"f06df1db0559b1557e08daa5944cfb962f47144c86003fc9692dd66eaa9ee71e"} Jan 24 00:22:43 crc kubenswrapper[4676]: I0124 00:22:43.966844 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54b7b855f4-s49zw" event={"ID":"c015646b-9361-4b5c-b465-a66d7fc5cc53","Type":"ContainerStarted","Data":"f0bb652a2091409660eb1ee979e75bc34ed86c2ef99c95a31fdf1e84f770ce3a"} Jan 24 00:22:43 crc kubenswrapper[4676]: I0124 00:22:43.967139 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:43 crc kubenswrapper[4676]: I0124 00:22:43.967151 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:43 crc kubenswrapper[4676]: I0124 00:22:43.974429 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68bd7fb46c-sflbz" event={"ID":"009f35e0-3a98-453b-b92e-8db9e5c92798","Type":"ContainerStarted","Data":"b7aa88ef8c3f568bb1563d37814bc2675d26dd45c4f8e4f2822b1fc05113cf2a"} Jan 24 00:22:43 crc kubenswrapper[4676]: I0124 00:22:43.990329 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" event={"ID":"a05052ce-062e-423c-80cf-78349e42718f","Type":"ContainerStarted","Data":"a0cda41f1acb5a3f79a1ec315685e0142319a74b0456f90b43600bdc9997e6f3"} Jan 24 00:22:43 crc kubenswrapper[4676]: I0124 00:22:43.995921 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-54b7b855f4-s49zw" 
podStartSLOduration=3.995902549 podStartE2EDuration="3.995902549s" podCreationTimestamp="2026-01-24 00:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:22:43.988815203 +0000 UTC m=+1148.018786204" watchObservedRunningTime="2026-01-24 00:22:43.995902549 +0000 UTC m=+1148.025873550" Jan 24 00:22:44 crc kubenswrapper[4676]: I0124 00:22:44.022721 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-68bd7fb46c-sflbz" podStartSLOduration=3.956530082 podStartE2EDuration="8.022702284s" podCreationTimestamp="2026-01-24 00:22:36 +0000 UTC" firstStartedPulling="2026-01-24 00:22:38.206891001 +0000 UTC m=+1142.236862002" lastFinishedPulling="2026-01-24 00:22:42.273063203 +0000 UTC m=+1146.303034204" observedRunningTime="2026-01-24 00:22:44.012730001 +0000 UTC m=+1148.042701012" watchObservedRunningTime="2026-01-24 00:22:44.022702284 +0000 UTC m=+1148.052673285" Jan 24 00:22:47 crc kubenswrapper[4676]: I0124 00:22:47.491585 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:22:47 crc kubenswrapper[4676]: I0124 00:22:47.515055 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-79ccd69c74-nj8k6" podStartSLOduration=7.682827598 podStartE2EDuration="11.515039755s" podCreationTimestamp="2026-01-24 00:22:36 +0000 UTC" firstStartedPulling="2026-01-24 00:22:38.443853327 +0000 UTC m=+1142.473824328" lastFinishedPulling="2026-01-24 00:22:42.276065494 +0000 UTC m=+1146.306036485" observedRunningTime="2026-01-24 00:22:44.042493036 +0000 UTC m=+1148.072464037" watchObservedRunningTime="2026-01-24 00:22:47.515039755 +0000 UTC m=+1151.545010756" Jan 24 00:22:47 crc kubenswrapper[4676]: I0124 00:22:47.549687 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-56df8fb6b7-hpwcp"] Jan 24 00:22:47 crc kubenswrapper[4676]: I0124 00:22:47.549924 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" podUID="2417692e-7845-446e-a60a-bb3842851912" containerName="dnsmasq-dns" containerID="cri-o://d6cc348312351f0aa6a3a89dece34eea57cb75b4f69f18c04d668c9984cdefc4" gracePeriod=10 Jan 24 00:22:48 crc kubenswrapper[4676]: I0124 00:22:48.025493 4676 generic.go:334] "Generic (PLEG): container finished" podID="2417692e-7845-446e-a60a-bb3842851912" containerID="d6cc348312351f0aa6a3a89dece34eea57cb75b4f69f18c04d668c9984cdefc4" exitCode=0 Jan 24 00:22:48 crc kubenswrapper[4676]: I0124 00:22:48.025570 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" event={"ID":"2417692e-7845-446e-a60a-bb3842851912","Type":"ContainerDied","Data":"d6cc348312351f0aa6a3a89dece34eea57cb75b4f69f18c04d668c9984cdefc4"} Jan 24 00:22:49 crc kubenswrapper[4676]: I0124 00:22:49.034011 4676 generic.go:334] "Generic (PLEG): container finished" podID="d1163cc6-7ce1-4f2b-9d4a-f3e215177842" containerID="e4299d6b00616548489e178e72141c6cc9961df799e192961e6eb82793cfee65" exitCode=0 Jan 24 00:22:49 crc kubenswrapper[4676]: I0124 00:22:49.034217 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7klnv" event={"ID":"d1163cc6-7ce1-4f2b-9d4a-f3e215177842","Type":"ContainerDied","Data":"e4299d6b00616548489e178e72141c6cc9961df799e192961e6eb82793cfee65"} Jan 24 00:22:49 crc kubenswrapper[4676]: I0124 00:22:49.734692 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:50 crc kubenswrapper[4676]: I0124 00:22:50.178673 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-bf988b4bd-ls7hp" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 24 00:22:50 crc kubenswrapper[4676]: I0124 00:22:50.322002 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-f876ddf46-fs7qv" podUID="ac7dce6b-3bd9-4ad9-9485-83d9384b8bad" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 24 00:22:50 crc kubenswrapper[4676]: I0124 00:22:50.864544 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:22:50 crc kubenswrapper[4676]: I0124 00:22:50.910220 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-ovsdbserver-sb\") pod \"2417692e-7845-446e-a60a-bb3842851912\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " Jan 24 00:22:50 crc kubenswrapper[4676]: I0124 00:22:50.910257 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rjk5\" (UniqueName: \"kubernetes.io/projected/2417692e-7845-446e-a60a-bb3842851912-kube-api-access-5rjk5\") pod \"2417692e-7845-446e-a60a-bb3842851912\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " Jan 24 00:22:50 crc kubenswrapper[4676]: I0124 00:22:50.910290 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-dns-swift-storage-0\") pod \"2417692e-7845-446e-a60a-bb3842851912\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " Jan 24 00:22:50 crc kubenswrapper[4676]: I0124 00:22:50.910329 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-config\") pod \"2417692e-7845-446e-a60a-bb3842851912\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " Jan 24 00:22:50 crc kubenswrapper[4676]: I0124 00:22:50.910370 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-ovsdbserver-nb\") pod \"2417692e-7845-446e-a60a-bb3842851912\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " Jan 24 00:22:50 crc kubenswrapper[4676]: I0124 00:22:50.910527 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-dns-svc\") pod \"2417692e-7845-446e-a60a-bb3842851912\" (UID: \"2417692e-7845-446e-a60a-bb3842851912\") " Jan 24 00:22:50 crc kubenswrapper[4676]: I0124 00:22:50.922104 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7klnv" Jan 24 00:22:50 crc kubenswrapper[4676]: I0124 00:22:50.939480 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2417692e-7845-446e-a60a-bb3842851912-kube-api-access-5rjk5" (OuterVolumeSpecName: "kube-api-access-5rjk5") pod "2417692e-7845-446e-a60a-bb3842851912" (UID: "2417692e-7845-446e-a60a-bb3842851912"). InnerVolumeSpecName "kube-api-access-5rjk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:22:50 crc kubenswrapper[4676]: I0124 00:22:50.965068 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2417692e-7845-446e-a60a-bb3842851912" (UID: "2417692e-7845-446e-a60a-bb3842851912"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:50 crc kubenswrapper[4676]: I0124 00:22:50.992821 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2417692e-7845-446e-a60a-bb3842851912" (UID: "2417692e-7845-446e-a60a-bb3842851912"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.012947 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-scripts\") pod \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.013005 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsgpg\" (UniqueName: \"kubernetes.io/projected/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-kube-api-access-rsgpg\") pod \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.013099 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-db-sync-config-data\") pod \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.013134 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-config-data\") pod \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.013178 4676 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-combined-ca-bundle\") pod \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.013227 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-etc-machine-id\") pod \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\" (UID: \"d1163cc6-7ce1-4f2b-9d4a-f3e215177842\") " Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.013618 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.013634 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.013644 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rjk5\" (UniqueName: \"kubernetes.io/projected/2417692e-7845-446e-a60a-bb3842851912-kube-api-access-5rjk5\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.013676 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d1163cc6-7ce1-4f2b-9d4a-f3e215177842" (UID: "d1163cc6-7ce1-4f2b-9d4a-f3e215177842"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.042802 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d1163cc6-7ce1-4f2b-9d4a-f3e215177842" (UID: "d1163cc6-7ce1-4f2b-9d4a-f3e215177842"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.092004 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7klnv" event={"ID":"d1163cc6-7ce1-4f2b-9d4a-f3e215177842","Type":"ContainerDied","Data":"70e37739b2a173b291b1d289a2d2839fa4f44eb620a1fa26acee32cd0b9f1716"} Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.092055 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70e37739b2a173b291b1d289a2d2839fa4f44eb620a1fa26acee32cd0b9f1716" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.092154 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7klnv" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.105784 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-scripts" (OuterVolumeSpecName: "scripts") pod "d1163cc6-7ce1-4f2b-9d4a-f3e215177842" (UID: "d1163cc6-7ce1-4f2b-9d4a-f3e215177842"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.121656 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" event={"ID":"2417692e-7845-446e-a60a-bb3842851912","Type":"ContainerDied","Data":"a17194b7f35eb9f8070fa9acebc74f7ada011686f763d785550450b94c66597c"} Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.121714 4676 scope.go:117] "RemoveContainer" containerID="d6cc348312351f0aa6a3a89dece34eea57cb75b4f69f18c04d668c9984cdefc4" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.121860 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hpwcp" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.128046 4676 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.128075 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.128084 4676 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.150652 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-kube-api-access-rsgpg" (OuterVolumeSpecName: "kube-api-access-rsgpg") pod "d1163cc6-7ce1-4f2b-9d4a-f3e215177842" (UID: "d1163cc6-7ce1-4f2b-9d4a-f3e215177842"). InnerVolumeSpecName "kube-api-access-rsgpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.161509 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2417692e-7845-446e-a60a-bb3842851912" (UID: "2417692e-7845-446e-a60a-bb3842851912"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.162665 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-config" (OuterVolumeSpecName: "config") pod "2417692e-7845-446e-a60a-bb3842851912" (UID: "2417692e-7845-446e-a60a-bb3842851912"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.170920 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2417692e-7845-446e-a60a-bb3842851912" (UID: "2417692e-7845-446e-a60a-bb3842851912"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.172661 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1163cc6-7ce1-4f2b-9d4a-f3e215177842" (UID: "d1163cc6-7ce1-4f2b-9d4a-f3e215177842"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.242329 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-config-data" (OuterVolumeSpecName: "config-data") pod "d1163cc6-7ce1-4f2b-9d4a-f3e215177842" (UID: "d1163cc6-7ce1-4f2b-9d4a-f3e215177842"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.243760 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.243779 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.243793 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsgpg\" (UniqueName: \"kubernetes.io/projected/d1163cc6-7ce1-4f2b-9d4a-f3e215177842-kube-api-access-rsgpg\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.243802 4676 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.243811 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.243820 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2417692e-7845-446e-a60a-bb3842851912-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.268467 4676 scope.go:117] "RemoveContainer" containerID="4c0f072637105b041cb28fbed44761b77e903f1e8315f62df0bde2e50a63a7fa" Jan 24 00:22:51 crc kubenswrapper[4676]: E0124 00:22:51.354832 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.469286 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 00:22:51 crc kubenswrapper[4676]: E0124 00:22:51.469633 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2417692e-7845-446e-a60a-bb3842851912" containerName="dnsmasq-dns" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.469648 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2417692e-7845-446e-a60a-bb3842851912" containerName="dnsmasq-dns" Jan 24 00:22:51 crc kubenswrapper[4676]: E0124 00:22:51.469662 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2417692e-7845-446e-a60a-bb3842851912" containerName="init" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.469669 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2417692e-7845-446e-a60a-bb3842851912" containerName="init" Jan 24 00:22:51 crc kubenswrapper[4676]: E0124 00:22:51.469708 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1163cc6-7ce1-4f2b-9d4a-f3e215177842" containerName="cinder-db-sync" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.469714 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1163cc6-7ce1-4f2b-9d4a-f3e215177842" containerName="cinder-db-sync" Jan 24 00:22:51 crc kubenswrapper[4676]: 
I0124 00:22:51.469895 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="2417692e-7845-446e-a60a-bb3842851912" containerName="dnsmasq-dns" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.469917 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1163cc6-7ce1-4f2b-9d4a-f3e215177842" containerName="cinder-db-sync" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.472883 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.486962 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vxll9" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.487048 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.487172 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.487299 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.487845 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hpwcp"] Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.502443 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hpwcp"] Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.524497 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.549327 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.549389 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-scripts\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.549430 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vknvh\" (UniqueName: \"kubernetes.io/projected/fb66b31d-c165-4930-b633-79707cbf22bb-kube-api-access-vknvh\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.549497 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.549519 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.549542 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb66b31d-c165-4930-b633-79707cbf22bb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " 
pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.584266 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-jbptt"] Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.600926 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.646028 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-jbptt"] Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.652422 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-dns-svc\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.652479 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-config-data\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.652523 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-scripts\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.652546 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-config\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " 
pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.652563 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.652604 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vknvh\" (UniqueName: \"kubernetes.io/projected/fb66b31d-c165-4930-b633-79707cbf22bb-kube-api-access-vknvh\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.652650 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxx64\" (UniqueName: \"kubernetes.io/projected/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-kube-api-access-xxx64\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.652676 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.652692 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: 
\"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.652734 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.652755 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.652778 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb66b31d-c165-4930-b633-79707cbf22bb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.652860 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb66b31d-c165-4930-b633-79707cbf22bb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.674112 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.676004 
4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.684875 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vknvh\" (UniqueName: \"kubernetes.io/projected/fb66b31d-c165-4930-b633-79707cbf22bb-kube-api-access-vknvh\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.693594 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-config-data\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.702667 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-scripts\") pod \"cinder-scheduler-0\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.734420 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.736014 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.747705 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.754143 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxx64\" (UniqueName: \"kubernetes.io/projected/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-kube-api-access-xxx64\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.754187 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.754213 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.754315 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-dns-svc\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.754359 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-config\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.754415 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.755315 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.755425 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-dns-svc\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.757420 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-config\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.758278 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: 
\"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.758467 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.771202 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.798663 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.817548 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxx64\" (UniqueName: \"kubernetes.io/projected/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-kube-api-access-xxx64\") pod \"dnsmasq-dns-6578955fd5-jbptt\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.857431 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-scripts\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.857668 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-config-data-custom\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.857748 
4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbnnh\" (UniqueName: \"kubernetes.io/projected/a658cfcc-7de1-467d-8195-21f54cfaaf18-kube-api-access-hbnnh\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.857858 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.857979 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a658cfcc-7de1-467d-8195-21f54cfaaf18-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.858054 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a658cfcc-7de1-467d-8195-21f54cfaaf18-logs\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.858134 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-config-data\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.921509 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.963252 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a658cfcc-7de1-467d-8195-21f54cfaaf18-logs\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.963533 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-config-data\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.963575 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-scripts\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.963591 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-config-data-custom\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.963606 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbnnh\" (UniqueName: \"kubernetes.io/projected/a658cfcc-7de1-467d-8195-21f54cfaaf18-kube-api-access-hbnnh\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.963657 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.963722 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a658cfcc-7de1-467d-8195-21f54cfaaf18-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.963793 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a658cfcc-7de1-467d-8195-21f54cfaaf18-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.963939 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a658cfcc-7de1-467d-8195-21f54cfaaf18-logs\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.982580 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-scripts\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.988224 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-config-data-custom\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:51 crc kubenswrapper[4676]: I0124 00:22:51.995246 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-config-data\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:52 crc kubenswrapper[4676]: I0124 00:22:52.006890 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:52 crc kubenswrapper[4676]: I0124 00:22:52.026924 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbnnh\" (UniqueName: \"kubernetes.io/projected/a658cfcc-7de1-467d-8195-21f54cfaaf18-kube-api-access-hbnnh\") pod \"cinder-api-0\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " pod="openstack/cinder-api-0" Jan 24 00:22:52 crc kubenswrapper[4676]: I0124 00:22:52.138626 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54d56910-d4b7-45b1-8699-5af7eaa29b96","Type":"ContainerStarted","Data":"1b26cefde45473bb50d47ba50a2ec78ece32f247247ca63eb469045a70b4a673"} Jan 24 00:22:52 crc kubenswrapper[4676]: I0124 00:22:52.138775 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerName="ceilometer-notification-agent" containerID="cri-o://f53a3c895796c44b94d61775eb726821f040873261a1a8fbdb79f578ca085ec5" gracePeriod=30 Jan 24 00:22:52 crc kubenswrapper[4676]: I0124 00:22:52.139607 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 00:22:52 crc kubenswrapper[4676]: I0124 00:22:52.139891 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerName="proxy-httpd" containerID="cri-o://1b26cefde45473bb50d47ba50a2ec78ece32f247247ca63eb469045a70b4a673" gracePeriod=30 Jan 24 00:22:52 crc kubenswrapper[4676]: I0124 00:22:52.139937 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerName="sg-core" containerID="cri-o://dae61f7a8060ccdb486fd53e86c1fca33bbb71e0fb9f0d456ad2e0a5a8007cbd" gracePeriod=30 Jan 24 00:22:52 crc kubenswrapper[4676]: I0124 00:22:52.227922 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 24 00:22:52 crc kubenswrapper[4676]: I0124 00:22:52.275321 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2417692e-7845-446e-a60a-bb3842851912" path="/var/lib/kubelet/pods/2417692e-7845-446e-a60a-bb3842851912/volumes" Jan 24 00:22:52 crc kubenswrapper[4676]: I0124 00:22:52.588597 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d7c68bc86-pdds6" podUID="6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 00:22:52 crc kubenswrapper[4676]: I0124 00:22:52.615984 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 00:22:52 crc kubenswrapper[4676]: I0124 00:22:52.704877 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-jbptt"] Jan 24 00:22:53 crc kubenswrapper[4676]: I0124 00:22:53.145899 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 24 00:22:53 crc kubenswrapper[4676]: I0124 00:22:53.151249 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-jbptt" 
event={"ID":"9b4acc7d-6a5f-4544-b73e-61b9a620c17d","Type":"ContainerStarted","Data":"eec257beeff7ba3f585f1d2f0b787abd4407c78ad654a5cbdd01eb786faa02e2"} Jan 24 00:22:53 crc kubenswrapper[4676]: I0124 00:22:53.172496 4676 generic.go:334] "Generic (PLEG): container finished" podID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerID="dae61f7a8060ccdb486fd53e86c1fca33bbb71e0fb9f0d456ad2e0a5a8007cbd" exitCode=2 Jan 24 00:22:53 crc kubenswrapper[4676]: I0124 00:22:53.172536 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54d56910-d4b7-45b1-8699-5af7eaa29b96","Type":"ContainerDied","Data":"dae61f7a8060ccdb486fd53e86c1fca33bbb71e0fb9f0d456ad2e0a5a8007cbd"} Jan 24 00:22:53 crc kubenswrapper[4676]: I0124 00:22:53.174262 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fb66b31d-c165-4930-b633-79707cbf22bb","Type":"ContainerStarted","Data":"f56885c2863c57838bc58c1bd3891e3e64ca0124e390aeca0b9564b1dc1517dd"} Jan 24 00:22:54 crc kubenswrapper[4676]: I0124 00:22:54.279954 4676 generic.go:334] "Generic (PLEG): container finished" podID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerID="f53a3c895796c44b94d61775eb726821f040873261a1a8fbdb79f578ca085ec5" exitCode=0 Jan 24 00:22:54 crc kubenswrapper[4676]: I0124 00:22:54.280570 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54d56910-d4b7-45b1-8699-5af7eaa29b96","Type":"ContainerDied","Data":"f53a3c895796c44b94d61775eb726821f040873261a1a8fbdb79f578ca085ec5"} Jan 24 00:22:54 crc kubenswrapper[4676]: I0124 00:22:54.298474 4676 generic.go:334] "Generic (PLEG): container finished" podID="9b4acc7d-6a5f-4544-b73e-61b9a620c17d" containerID="1da12014ab624fa95aa24ddc28c008acbe5db6c02d4d9e96a348452a0b9b9e38" exitCode=0 Jan 24 00:22:54 crc kubenswrapper[4676]: I0124 00:22:54.298536 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-jbptt" 
event={"ID":"9b4acc7d-6a5f-4544-b73e-61b9a620c17d","Type":"ContainerDied","Data":"1da12014ab624fa95aa24ddc28c008acbe5db6c02d4d9e96a348452a0b9b9e38"} Jan 24 00:22:54 crc kubenswrapper[4676]: I0124 00:22:54.310568 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a658cfcc-7de1-467d-8195-21f54cfaaf18","Type":"ContainerStarted","Data":"102eb5e37f96d7d93fb2c20897bc25ad810e3aed4efdc7876fc59651cbc8312e"} Jan 24 00:22:54 crc kubenswrapper[4676]: I0124 00:22:54.310633 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a658cfcc-7de1-467d-8195-21f54cfaaf18","Type":"ContainerStarted","Data":"5f5f73538e6873cbcb22a131253b622d3c4660ae4fc5a284910827195368977f"} Jan 24 00:22:54 crc kubenswrapper[4676]: I0124 00:22:54.502971 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-54b7b855f4-s49zw" podUID="c015646b-9361-4b5c-b465-a66d7fc5cc53" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 00:22:54 crc kubenswrapper[4676]: I0124 00:22:54.550582 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 24 00:22:54 crc kubenswrapper[4676]: I0124 00:22:54.586612 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5d7c68bc86-pdds6" podUID="6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 00:22:54 crc kubenswrapper[4676]: I0124 00:22:54.871266 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:55 crc kubenswrapper[4676]: I0124 00:22:55.325603 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"fb66b31d-c165-4930-b633-79707cbf22bb","Type":"ContainerStarted","Data":"617a7bf60c86bd8fb95135f8ae3c47f3775cca4441a612fa0eecdd996ac43e81"} Jan 24 00:22:55 crc kubenswrapper[4676]: I0124 00:22:55.354473 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-jbptt" event={"ID":"9b4acc7d-6a5f-4544-b73e-61b9a620c17d","Type":"ContainerStarted","Data":"3c1cdcc2ffd783555c7e0583ecb6ac91d2d0c15034969f7f5f6f8dba79f95867"} Jan 24 00:22:55 crc kubenswrapper[4676]: I0124 00:22:55.355531 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:22:55 crc kubenswrapper[4676]: I0124 00:22:55.389304 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-jbptt" podStartSLOduration=4.389283621 podStartE2EDuration="4.389283621s" podCreationTimestamp="2026-01-24 00:22:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:22:55.381286978 +0000 UTC m=+1159.411257979" watchObservedRunningTime="2026-01-24 00:22:55.389283621 +0000 UTC m=+1159.419254622" Jan 24 00:22:55 crc kubenswrapper[4676]: I0124 00:22:55.527626 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-54b7b855f4-s49zw" podUID="c015646b-9361-4b5c-b465-a66d7fc5cc53" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.353542 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.376496 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"fb66b31d-c165-4930-b633-79707cbf22bb","Type":"ContainerStarted","Data":"2401fa1f3b792936f62c0e8fef2c85d82ca1c01d1b0b3520974286bb7dbc9b75"} Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.379558 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a658cfcc-7de1-467d-8195-21f54cfaaf18" containerName="cinder-api-log" containerID="cri-o://102eb5e37f96d7d93fb2c20897bc25ad810e3aed4efdc7876fc59651cbc8312e" gracePeriod=30 Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.379571 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a658cfcc-7de1-467d-8195-21f54cfaaf18","Type":"ContainerStarted","Data":"58af4da8742433d2dc7484bcefd21745ff4c7fa2b61dd06f0a0fc41c705c834a"} Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.379633 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a658cfcc-7de1-467d-8195-21f54cfaaf18" containerName="cinder-api" containerID="cri-o://58af4da8742433d2dc7484bcefd21745ff4c7fa2b61dd06f0a0fc41c705c834a" gracePeriod=30 Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.379682 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.448279 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.448261608 podStartE2EDuration="5.448261608s" podCreationTimestamp="2026-01-24 00:22:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:22:56.430970242 +0000 UTC m=+1160.460941243" watchObservedRunningTime="2026-01-24 00:22:56.448261608 +0000 UTC m=+1160.478232609" Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.483741 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-scheduler-0" podStartSLOduration=4.595564254 podStartE2EDuration="5.483724106s" podCreationTimestamp="2026-01-24 00:22:51 +0000 UTC" firstStartedPulling="2026-01-24 00:22:52.662616006 +0000 UTC m=+1156.692586997" lastFinishedPulling="2026-01-24 00:22:53.550775848 +0000 UTC m=+1157.580746849" observedRunningTime="2026-01-24 00:22:56.478823387 +0000 UTC m=+1160.508794388" watchObservedRunningTime="2026-01-24 00:22:56.483724106 +0000 UTC m=+1160.513695107" Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.680927 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.800391 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.926320 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b7b989687-7nkv4"] Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.926549 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b7b989687-7nkv4" podUID="2e3ec968-d892-487a-930a-44c79123d54b" containerName="neutron-api" containerID="cri-o://bb52e795a11686953fafadd660026fb29a73517b463940520ebb5925860e8724" gracePeriod=30 Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.926769 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b7b989687-7nkv4" podUID="2e3ec968-d892-487a-930a-44c79123d54b" containerName="neutron-httpd" containerID="cri-o://b7aee4889e0ac214cfcb4e7b60ee0df9d729146b0da2baea1b44c7b338f2b7ae" gracePeriod=30 Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.944404 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-b7b989687-7nkv4" podUID="2e3ec968-d892-487a-930a-44c79123d54b" containerName="neutron-httpd" probeResult="failure" output="Get 
\"https://10.217.0.155:9696/\": read tcp 10.217.0.2:52290->10.217.0.155:9696: read: connection reset by peer" Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.969094 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bd579cfd9-q4npp"] Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.970912 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:56 crc kubenswrapper[4676]: I0124 00:22:56.984104 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bd579cfd9-q4npp"] Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.128834 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-internal-tls-certs\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.128898 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-public-tls-certs\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.128921 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-674zd\" (UniqueName: \"kubernetes.io/projected/6dda8455-6777-4efc-abf3-df547cf58339-kube-api-access-674zd\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.129138 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-config\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.129274 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-ovndb-tls-certs\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.129536 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-httpd-config\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.129585 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-combined-ca-bundle\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.231144 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-config\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.231202 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-ovndb-tls-certs\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.231268 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-httpd-config\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.231285 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-combined-ca-bundle\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.231331 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-internal-tls-certs\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.231361 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-public-tls-certs\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.231390 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-674zd\" (UniqueName: \"kubernetes.io/projected/6dda8455-6777-4efc-abf3-df547cf58339-kube-api-access-674zd\") pod 
\"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.238219 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-public-tls-certs\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.238834 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-httpd-config\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.240708 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-combined-ca-bundle\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.241726 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-config\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.242765 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-internal-tls-certs\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: 
I0124 00:22:57.261202 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dda8455-6777-4efc-abf3-df547cf58339-ovndb-tls-certs\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.290610 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-674zd\" (UniqueName: \"kubernetes.io/projected/6dda8455-6777-4efc-abf3-df547cf58339-kube-api-access-674zd\") pod \"neutron-bd579cfd9-q4npp\" (UID: \"6dda8455-6777-4efc-abf3-df547cf58339\") " pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.298866 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.436308 4676 generic.go:334] "Generic (PLEG): container finished" podID="a658cfcc-7de1-467d-8195-21f54cfaaf18" containerID="58af4da8742433d2dc7484bcefd21745ff4c7fa2b61dd06f0a0fc41c705c834a" exitCode=0 Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.436571 4676 generic.go:334] "Generic (PLEG): container finished" podID="a658cfcc-7de1-467d-8195-21f54cfaaf18" containerID="102eb5e37f96d7d93fb2c20897bc25ad810e3aed4efdc7876fc59651cbc8312e" exitCode=143 Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.436610 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a658cfcc-7de1-467d-8195-21f54cfaaf18","Type":"ContainerDied","Data":"58af4da8742433d2dc7484bcefd21745ff4c7fa2b61dd06f0a0fc41c705c834a"} Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.436636 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a658cfcc-7de1-467d-8195-21f54cfaaf18","Type":"ContainerDied","Data":"102eb5e37f96d7d93fb2c20897bc25ad810e3aed4efdc7876fc59651cbc8312e"} 
Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.449838 4676 generic.go:334] "Generic (PLEG): container finished" podID="2e3ec968-d892-487a-930a-44c79123d54b" containerID="b7aee4889e0ac214cfcb4e7b60ee0df9d729146b0da2baea1b44c7b338f2b7ae" exitCode=0 Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.449906 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7b989687-7nkv4" event={"ID":"2e3ec968-d892-487a-930a-44c79123d54b","Type":"ContainerDied","Data":"b7aee4889e0ac214cfcb4e7b60ee0df9d729146b0da2baea1b44c7b338f2b7ae"} Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.856425 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.954983 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a658cfcc-7de1-467d-8195-21f54cfaaf18-logs\") pod \"a658cfcc-7de1-467d-8195-21f54cfaaf18\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.955030 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-scripts\") pod \"a658cfcc-7de1-467d-8195-21f54cfaaf18\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.955066 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbnnh\" (UniqueName: \"kubernetes.io/projected/a658cfcc-7de1-467d-8195-21f54cfaaf18-kube-api-access-hbnnh\") pod \"a658cfcc-7de1-467d-8195-21f54cfaaf18\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.955100 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-config-data-custom\") pod \"a658cfcc-7de1-467d-8195-21f54cfaaf18\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.955150 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-config-data\") pod \"a658cfcc-7de1-467d-8195-21f54cfaaf18\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.955169 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a658cfcc-7de1-467d-8195-21f54cfaaf18-etc-machine-id\") pod \"a658cfcc-7de1-467d-8195-21f54cfaaf18\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.955276 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-combined-ca-bundle\") pod \"a658cfcc-7de1-467d-8195-21f54cfaaf18\" (UID: \"a658cfcc-7de1-467d-8195-21f54cfaaf18\") " Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.955365 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a658cfcc-7de1-467d-8195-21f54cfaaf18-logs" (OuterVolumeSpecName: "logs") pod "a658cfcc-7de1-467d-8195-21f54cfaaf18" (UID: "a658cfcc-7de1-467d-8195-21f54cfaaf18"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.955619 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a658cfcc-7de1-467d-8195-21f54cfaaf18-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.956847 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a658cfcc-7de1-467d-8195-21f54cfaaf18-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a658cfcc-7de1-467d-8195-21f54cfaaf18" (UID: "a658cfcc-7de1-467d-8195-21f54cfaaf18"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.964485 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a658cfcc-7de1-467d-8195-21f54cfaaf18" (UID: "a658cfcc-7de1-467d-8195-21f54cfaaf18"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.973540 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a658cfcc-7de1-467d-8195-21f54cfaaf18-kube-api-access-hbnnh" (OuterVolumeSpecName: "kube-api-access-hbnnh") pod "a658cfcc-7de1-467d-8195-21f54cfaaf18" (UID: "a658cfcc-7de1-467d-8195-21f54cfaaf18"). InnerVolumeSpecName "kube-api-access-hbnnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:22:57 crc kubenswrapper[4676]: I0124 00:22:57.993558 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-scripts" (OuterVolumeSpecName: "scripts") pod "a658cfcc-7de1-467d-8195-21f54cfaaf18" (UID: "a658cfcc-7de1-467d-8195-21f54cfaaf18"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.009265 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a658cfcc-7de1-467d-8195-21f54cfaaf18" (UID: "a658cfcc-7de1-467d-8195-21f54cfaaf18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.018493 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-config-data" (OuterVolumeSpecName: "config-data") pod "a658cfcc-7de1-467d-8195-21f54cfaaf18" (UID: "a658cfcc-7de1-467d-8195-21f54cfaaf18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.058869 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.058909 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.058922 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbnnh\" (UniqueName: \"kubernetes.io/projected/a658cfcc-7de1-467d-8195-21f54cfaaf18-kube-api-access-hbnnh\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.058934 4676 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.058944 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a658cfcc-7de1-467d-8195-21f54cfaaf18-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.058954 4676 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a658cfcc-7de1-467d-8195-21f54cfaaf18-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.201327 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bd579cfd9-q4npp"] Jan 24 00:22:58 crc kubenswrapper[4676]: W0124 00:22:58.203624 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dda8455_6777_4efc_abf3_df547cf58339.slice/crio-4a8cd5c99d2b112e87c52a3a9734d394370cd142a57740512c9257e387d06dab WatchSource:0}: Error finding container 4a8cd5c99d2b112e87c52a3a9734d394370cd142a57740512c9257e387d06dab: Status 404 returned error can't find the container with id 4a8cd5c99d2b112e87c52a3a9734d394370cd142a57740512c9257e387d06dab Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.216324 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-54b7b855f4-s49zw" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.292286 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d7c68bc86-pdds6"] Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.293668 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d7c68bc86-pdds6" podUID="6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" containerName="barbican-api-log" 
containerID="cri-o://f6075d82d7703b6db2b90cc384faadb0429bfb79747fbdd96d4361dda203ec6f" gracePeriod=30 Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.293810 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d7c68bc86-pdds6" podUID="6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" containerName="barbican-api" containerID="cri-o://dee85e5f9b90c28514a6fe5264e7105a66d2041ede87770d395b2f20c4c19a92" gracePeriod=30 Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.475308 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bd579cfd9-q4npp" event={"ID":"6dda8455-6777-4efc-abf3-df547cf58339","Type":"ContainerStarted","Data":"4a8cd5c99d2b112e87c52a3a9734d394370cd142a57740512c9257e387d06dab"} Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.495648 4676 generic.go:334] "Generic (PLEG): container finished" podID="6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" containerID="f6075d82d7703b6db2b90cc384faadb0429bfb79747fbdd96d4361dda203ec6f" exitCode=143 Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.495971 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d7c68bc86-pdds6" event={"ID":"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4","Type":"ContainerDied","Data":"f6075d82d7703b6db2b90cc384faadb0429bfb79747fbdd96d4361dda203ec6f"} Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.524268 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.524479 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a658cfcc-7de1-467d-8195-21f54cfaaf18","Type":"ContainerDied","Data":"5f5f73538e6873cbcb22a131253b622d3c4660ae4fc5a284910827195368977f"} Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.524618 4676 scope.go:117] "RemoveContainer" containerID="58af4da8742433d2dc7484bcefd21745ff4c7fa2b61dd06f0a0fc41c705c834a" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.608293 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.620595 4676 scope.go:117] "RemoveContainer" containerID="102eb5e37f96d7d93fb2c20897bc25ad810e3aed4efdc7876fc59651cbc8312e" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.631997 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.671911 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 24 00:22:58 crc kubenswrapper[4676]: E0124 00:22:58.672368 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a658cfcc-7de1-467d-8195-21f54cfaaf18" containerName="cinder-api-log" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.672434 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="a658cfcc-7de1-467d-8195-21f54cfaaf18" containerName="cinder-api-log" Jan 24 00:22:58 crc kubenswrapper[4676]: E0124 00:22:58.672446 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a658cfcc-7de1-467d-8195-21f54cfaaf18" containerName="cinder-api" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.672453 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="a658cfcc-7de1-467d-8195-21f54cfaaf18" containerName="cinder-api" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.672707 4676 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a658cfcc-7de1-467d-8195-21f54cfaaf18" containerName="cinder-api" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.672746 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="a658cfcc-7de1-467d-8195-21f54cfaaf18" containerName="cinder-api-log" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.674554 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.680653 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.680799 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.688462 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.688970 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.780734 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.781204 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.781291 4676 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-scripts\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.781371 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d38cef52-387b-4633-be75-6dc455ad53c4-logs\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.781518 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d38cef52-387b-4633-be75-6dc455ad53c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.781600 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.781710 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.781790 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-config-data\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.781868 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv9kc\" (UniqueName: \"kubernetes.io/projected/d38cef52-387b-4633-be75-6dc455ad53c4-kube-api-access-bv9kc\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.883494 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.883725 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-scripts\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.883901 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d38cef52-387b-4633-be75-6dc455ad53c4-logs\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.883971 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d38cef52-387b-4633-be75-6dc455ad53c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc 
kubenswrapper[4676]: I0124 00:22:58.884027 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.884148 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d38cef52-387b-4633-be75-6dc455ad53c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.884234 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.884269 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-config-data\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.884313 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d38cef52-387b-4633-be75-6dc455ad53c4-logs\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.884320 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv9kc\" (UniqueName: \"kubernetes.io/projected/d38cef52-387b-4633-be75-6dc455ad53c4-kube-api-access-bv9kc\") 
pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.884366 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.890988 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.891019 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-scripts\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.891680 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-config-data\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.892036 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.892731 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.909728 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38cef52-387b-4633-be75-6dc455ad53c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.910692 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv9kc\" (UniqueName: \"kubernetes.io/projected/d38cef52-387b-4633-be75-6dc455ad53c4-kube-api-access-bv9kc\") pod \"cinder-api-0\" (UID: \"d38cef52-387b-4633-be75-6dc455ad53c4\") " pod="openstack/cinder-api-0" Jan 24 00:22:58 crc kubenswrapper[4676]: I0124 00:22:58.964764 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-b7b989687-7nkv4" podUID="2e3ec968-d892-487a-930a-44c79123d54b" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9696/\": dial tcp 10.217.0.155:9696: connect: connection refused" Jan 24 00:22:59 crc kubenswrapper[4676]: I0124 00:22:59.050037 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 24 00:22:59 crc kubenswrapper[4676]: I0124 00:22:59.538990 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bd579cfd9-q4npp" event={"ID":"6dda8455-6777-4efc-abf3-df547cf58339","Type":"ContainerStarted","Data":"67be95b90cf93b9933fb1ae2105ea1f08de29440ba90feaccd769f190d427798"} Jan 24 00:22:59 crc kubenswrapper[4676]: I0124 00:22:59.539315 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bd579cfd9-q4npp" event={"ID":"6dda8455-6777-4efc-abf3-df547cf58339","Type":"ContainerStarted","Data":"049dc5462fbdbc52cc39195d99e7a055dad132ed0888c1e53388800217cd3f2b"} Jan 24 00:22:59 crc kubenswrapper[4676]: I0124 00:22:59.540503 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:22:59 crc kubenswrapper[4676]: I0124 00:22:59.572741 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bd579cfd9-q4npp" podStartSLOduration=3.572727321 podStartE2EDuration="3.572727321s" podCreationTimestamp="2026-01-24 00:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:22:59.566566303 +0000 UTC m=+1163.596537304" watchObservedRunningTime="2026-01-24 00:22:59.572727321 +0000 UTC m=+1163.602698322" Jan 24 00:22:59 crc kubenswrapper[4676]: I0124 00:22:59.735675 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.172006 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-bf988b4bd-ls7hp" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 
00:23:00.172128 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.172700 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"9b6a2140f368edfea407714ce9a9afb7bd7057997de5cfab1861b7a16523894e"} pod="openstack/horizon-bf988b4bd-ls7hp" containerMessage="Container horizon failed startup probe, will be restarted" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.172733 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bf988b4bd-ls7hp" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" containerID="cri-o://9b6a2140f368edfea407714ce9a9afb7bd7057997de5cfab1861b7a16523894e" gracePeriod=30 Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.271492 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a658cfcc-7de1-467d-8195-21f54cfaaf18" path="/var/lib/kubelet/pods/a658cfcc-7de1-467d-8195-21f54cfaaf18/volumes" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.322532 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-f876ddf46-fs7qv" podUID="ac7dce6b-3bd9-4ad9-9485-83d9384b8bad" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.322609 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.323321 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"c8c367e4ea3e593fa82e51f67b7f8371ac3844af82364461f69e4513b638a905"} pod="openstack/horizon-f876ddf46-fs7qv" 
containerMessage="Container horizon failed startup probe, will be restarted" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.323352 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f876ddf46-fs7qv" podUID="ac7dce6b-3bd9-4ad9-9485-83d9384b8bad" containerName="horizon" containerID="cri-o://c8c367e4ea3e593fa82e51f67b7f8371ac3844af82364461f69e4513b638a905" gracePeriod=30 Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.525948 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.556634 4676 generic.go:334] "Generic (PLEG): container finished" podID="2e3ec968-d892-487a-930a-44c79123d54b" containerID="bb52e795a11686953fafadd660026fb29a73517b463940520ebb5925860e8724" exitCode=0 Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.556704 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b7b989687-7nkv4" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.556737 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7b989687-7nkv4" event={"ID":"2e3ec968-d892-487a-930a-44c79123d54b","Type":"ContainerDied","Data":"bb52e795a11686953fafadd660026fb29a73517b463940520ebb5925860e8724"} Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.556787 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7b989687-7nkv4" event={"ID":"2e3ec968-d892-487a-930a-44c79123d54b","Type":"ContainerDied","Data":"59d763231e986383c1ba6301030139cf376841912045d87d7f5c53b19c30ff2a"} Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.556809 4676 scope.go:117] "RemoveContainer" containerID="b7aee4889e0ac214cfcb4e7b60ee0df9d729146b0da2baea1b44c7b338f2b7ae" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.569255 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"d38cef52-387b-4633-be75-6dc455ad53c4","Type":"ContainerStarted","Data":"abfd93d434887a511032777d398f6330f8b3744fee9ab7b6582f92d698b400f1"} Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.569285 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d38cef52-387b-4633-be75-6dc455ad53c4","Type":"ContainerStarted","Data":"b1b11c7d08d075296911e4da993464454c371f2ba04a09e772927aeafdf183b7"} Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.618601 4676 scope.go:117] "RemoveContainer" containerID="bb52e795a11686953fafadd660026fb29a73517b463940520ebb5925860e8724" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.625626 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-config\") pod \"2e3ec968-d892-487a-930a-44c79123d54b\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.625783 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-ovndb-tls-certs\") pod \"2e3ec968-d892-487a-930a-44c79123d54b\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.625808 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm9l4\" (UniqueName: \"kubernetes.io/projected/2e3ec968-d892-487a-930a-44c79123d54b-kube-api-access-nm9l4\") pod \"2e3ec968-d892-487a-930a-44c79123d54b\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.625854 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-internal-tls-certs\") pod \"2e3ec968-d892-487a-930a-44c79123d54b\" 
(UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.625895 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-httpd-config\") pod \"2e3ec968-d892-487a-930a-44c79123d54b\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.625943 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-public-tls-certs\") pod \"2e3ec968-d892-487a-930a-44c79123d54b\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.626027 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-combined-ca-bundle\") pod \"2e3ec968-d892-487a-930a-44c79123d54b\" (UID: \"2e3ec968-d892-487a-930a-44c79123d54b\") " Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.645122 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2e3ec968-d892-487a-930a-44c79123d54b" (UID: "2e3ec968-d892-487a-930a-44c79123d54b"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.646631 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e3ec968-d892-487a-930a-44c79123d54b-kube-api-access-nm9l4" (OuterVolumeSpecName: "kube-api-access-nm9l4") pod "2e3ec968-d892-487a-930a-44c79123d54b" (UID: "2e3ec968-d892-487a-930a-44c79123d54b"). InnerVolumeSpecName "kube-api-access-nm9l4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.671208 4676 scope.go:117] "RemoveContainer" containerID="b7aee4889e0ac214cfcb4e7b60ee0df9d729146b0da2baea1b44c7b338f2b7ae" Jan 24 00:23:00 crc kubenswrapper[4676]: E0124 00:23:00.672877 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7aee4889e0ac214cfcb4e7b60ee0df9d729146b0da2baea1b44c7b338f2b7ae\": container with ID starting with b7aee4889e0ac214cfcb4e7b60ee0df9d729146b0da2baea1b44c7b338f2b7ae not found: ID does not exist" containerID="b7aee4889e0ac214cfcb4e7b60ee0df9d729146b0da2baea1b44c7b338f2b7ae" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.672917 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7aee4889e0ac214cfcb4e7b60ee0df9d729146b0da2baea1b44c7b338f2b7ae"} err="failed to get container status \"b7aee4889e0ac214cfcb4e7b60ee0df9d729146b0da2baea1b44c7b338f2b7ae\": rpc error: code = NotFound desc = could not find container \"b7aee4889e0ac214cfcb4e7b60ee0df9d729146b0da2baea1b44c7b338f2b7ae\": container with ID starting with b7aee4889e0ac214cfcb4e7b60ee0df9d729146b0da2baea1b44c7b338f2b7ae not found: ID does not exist" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.672940 4676 scope.go:117] "RemoveContainer" containerID="bb52e795a11686953fafadd660026fb29a73517b463940520ebb5925860e8724" Jan 24 00:23:00 crc kubenswrapper[4676]: E0124 00:23:00.673417 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb52e795a11686953fafadd660026fb29a73517b463940520ebb5925860e8724\": container with ID starting with bb52e795a11686953fafadd660026fb29a73517b463940520ebb5925860e8724 not found: ID does not exist" containerID="bb52e795a11686953fafadd660026fb29a73517b463940520ebb5925860e8724" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.673450 
4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb52e795a11686953fafadd660026fb29a73517b463940520ebb5925860e8724"} err="failed to get container status \"bb52e795a11686953fafadd660026fb29a73517b463940520ebb5925860e8724\": rpc error: code = NotFound desc = could not find container \"bb52e795a11686953fafadd660026fb29a73517b463940520ebb5925860e8724\": container with ID starting with bb52e795a11686953fafadd660026fb29a73517b463940520ebb5925860e8724 not found: ID does not exist" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.723300 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2e3ec968-d892-487a-930a-44c79123d54b" (UID: "2e3ec968-d892-487a-930a-44c79123d54b"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.728904 4676 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.728929 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm9l4\" (UniqueName: \"kubernetes.io/projected/2e3ec968-d892-487a-930a-44c79123d54b-kube-api-access-nm9l4\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.728943 4676 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.729038 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-combined-ca-bundle" 
(OuterVolumeSpecName: "combined-ca-bundle") pod "2e3ec968-d892-487a-930a-44c79123d54b" (UID: "2e3ec968-d892-487a-930a-44c79123d54b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.734742 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2e3ec968-d892-487a-930a-44c79123d54b" (UID: "2e3ec968-d892-487a-930a-44c79123d54b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.750754 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2e3ec968-d892-487a-930a-44c79123d54b" (UID: "2e3ec968-d892-487a-930a-44c79123d54b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.773477 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-config" (OuterVolumeSpecName: "config") pod "2e3ec968-d892-487a-930a-44c79123d54b" (UID: "2e3ec968-d892-487a-930a-44c79123d54b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.830899 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.830928 4676 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.830940 4676 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.830949 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3ec968-d892-487a-930a-44c79123d54b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.892509 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b7b989687-7nkv4"] Jan 24 00:23:00 crc kubenswrapper[4676]: I0124 00:23:00.899954 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b7b989687-7nkv4"] Jan 24 00:23:01 crc kubenswrapper[4676]: I0124 00:23:01.579941 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d38cef52-387b-4633-be75-6dc455ad53c4","Type":"ContainerStarted","Data":"c6e0b44a4a2847f30f75aa6d61b861aa105c8c4d85e6a59521835b3f3e845693"} Jan 24 00:23:01 crc kubenswrapper[4676]: I0124 00:23:01.605314 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.605296446 podStartE2EDuration="3.605296446s" podCreationTimestamp="2026-01-24 
00:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:23:01.603633025 +0000 UTC m=+1165.633604026" watchObservedRunningTime="2026-01-24 00:23:01.605296446 +0000 UTC m=+1165.635267457" Jan 24 00:23:01 crc kubenswrapper[4676]: I0124 00:23:01.923519 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:23:01 crc kubenswrapper[4676]: I0124 00:23:01.991957 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xg9nt"] Jan 24 00:23:01 crc kubenswrapper[4676]: I0124 00:23:01.992573 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" podUID="e1f791cc-7690-4f80-9007-a14fcca8a632" containerName="dnsmasq-dns" containerID="cri-o://e2b72fc7b5df059c0e36f49a0b10a80c4bc4c2798a40f7961418e5eff05aa92b" gracePeriod=10 Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.206359 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.227338 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.267585 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e3ec968-d892-487a-930a-44c79123d54b" path="/var/lib/kubelet/pods/2e3ec968-d892-487a-930a-44c79123d54b/volumes" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.283347 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-combined-ca-bundle\") pod \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.283474 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-logs\") pod \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.283497 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-config-data-custom\") pod \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.283536 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-config-data\") pod \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.283620 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfjfx\" (UniqueName: \"kubernetes.io/projected/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-kube-api-access-bfjfx\") pod 
\"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\" (UID: \"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4\") " Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.284204 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-logs" (OuterVolumeSpecName: "logs") pod "6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" (UID: "6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.285194 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.292918 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" (UID: "6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.300057 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-kube-api-access-bfjfx" (OuterVolumeSpecName: "kube-api-access-bfjfx") pod "6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" (UID: "6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4"). InnerVolumeSpecName "kube-api-access-bfjfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.340228 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" (UID: "6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.376797 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-config-data" (OuterVolumeSpecName: "config-data") pod "6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" (UID: "6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.387415 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.387572 4676 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.387620 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.387630 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfjfx\" (UniqueName: \"kubernetes.io/projected/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-kube-api-access-bfjfx\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.387640 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.573632 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.591042 4676 generic.go:334] "Generic (PLEG): container finished" podID="6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" containerID="dee85e5f9b90c28514a6fe5264e7105a66d2041ede87770d395b2f20c4c19a92" exitCode=0 Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.591100 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d7c68bc86-pdds6" event={"ID":"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4","Type":"ContainerDied","Data":"dee85e5f9b90c28514a6fe5264e7105a66d2041ede87770d395b2f20c4c19a92"} Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.591126 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d7c68bc86-pdds6" event={"ID":"6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4","Type":"ContainerDied","Data":"779fb9c258d57c3a4a4641f3663256b4984fdbbbea094e97da3fbf8189a0afcc"} Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.591143 4676 scope.go:117] "RemoveContainer" containerID="dee85e5f9b90c28514a6fe5264e7105a66d2041ede87770d395b2f20c4c19a92" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.591248 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d7c68bc86-pdds6" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.613454 4676 generic.go:334] "Generic (PLEG): container finished" podID="e1f791cc-7690-4f80-9007-a14fcca8a632" containerID="e2b72fc7b5df059c0e36f49a0b10a80c4bc4c2798a40f7961418e5eff05aa92b" exitCode=0 Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.613537 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" event={"ID":"e1f791cc-7690-4f80-9007-a14fcca8a632","Type":"ContainerDied","Data":"e2b72fc7b5df059c0e36f49a0b10a80c4bc4c2798a40f7961418e5eff05aa92b"} Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.613602 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.613609 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" event={"ID":"e1f791cc-7690-4f80-9007-a14fcca8a632","Type":"ContainerDied","Data":"2612852ab10c2bc7fb166122d80d7317364b784205eafc69c7981f567d09ca8e"} Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.614153 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.614273 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fb66b31d-c165-4930-b633-79707cbf22bb" containerName="cinder-scheduler" containerID="cri-o://617a7bf60c86bd8fb95135f8ae3c47f3775cca4441a612fa0eecdd996ac43e81" gracePeriod=30 Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.614422 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fb66b31d-c165-4930-b633-79707cbf22bb" containerName="probe" containerID="cri-o://2401fa1f3b792936f62c0e8fef2c85d82ca1c01d1b0b3520974286bb7dbc9b75" gracePeriod=30 
Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.652496 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d7c68bc86-pdds6"] Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.658431 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5d7c68bc86-pdds6"] Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.661399 4676 scope.go:117] "RemoveContainer" containerID="f6075d82d7703b6db2b90cc384faadb0429bfb79747fbdd96d4361dda203ec6f" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.685589 4676 scope.go:117] "RemoveContainer" containerID="dee85e5f9b90c28514a6fe5264e7105a66d2041ede87770d395b2f20c4c19a92" Jan 24 00:23:02 crc kubenswrapper[4676]: E0124 00:23:02.685952 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dee85e5f9b90c28514a6fe5264e7105a66d2041ede87770d395b2f20c4c19a92\": container with ID starting with dee85e5f9b90c28514a6fe5264e7105a66d2041ede87770d395b2f20c4c19a92 not found: ID does not exist" containerID="dee85e5f9b90c28514a6fe5264e7105a66d2041ede87770d395b2f20c4c19a92" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.685981 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee85e5f9b90c28514a6fe5264e7105a66d2041ede87770d395b2f20c4c19a92"} err="failed to get container status \"dee85e5f9b90c28514a6fe5264e7105a66d2041ede87770d395b2f20c4c19a92\": rpc error: code = NotFound desc = could not find container \"dee85e5f9b90c28514a6fe5264e7105a66d2041ede87770d395b2f20c4c19a92\": container with ID starting with dee85e5f9b90c28514a6fe5264e7105a66d2041ede87770d395b2f20c4c19a92 not found: ID does not exist" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.686004 4676 scope.go:117] "RemoveContainer" containerID="f6075d82d7703b6db2b90cc384faadb0429bfb79747fbdd96d4361dda203ec6f" Jan 24 00:23:02 crc kubenswrapper[4676]: E0124 00:23:02.686190 4676 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6075d82d7703b6db2b90cc384faadb0429bfb79747fbdd96d4361dda203ec6f\": container with ID starting with f6075d82d7703b6db2b90cc384faadb0429bfb79747fbdd96d4361dda203ec6f not found: ID does not exist" containerID="f6075d82d7703b6db2b90cc384faadb0429bfb79747fbdd96d4361dda203ec6f" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.686215 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6075d82d7703b6db2b90cc384faadb0429bfb79747fbdd96d4361dda203ec6f"} err="failed to get container status \"f6075d82d7703b6db2b90cc384faadb0429bfb79747fbdd96d4361dda203ec6f\": rpc error: code = NotFound desc = could not find container \"f6075d82d7703b6db2b90cc384faadb0429bfb79747fbdd96d4361dda203ec6f\": container with ID starting with f6075d82d7703b6db2b90cc384faadb0429bfb79747fbdd96d4361dda203ec6f not found: ID does not exist" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.686231 4676 scope.go:117] "RemoveContainer" containerID="e2b72fc7b5df059c0e36f49a0b10a80c4bc4c2798a40f7961418e5eff05aa92b" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.692566 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-dns-swift-storage-0\") pod \"e1f791cc-7690-4f80-9007-a14fcca8a632\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.692615 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-ovsdbserver-nb\") pod \"e1f791cc-7690-4f80-9007-a14fcca8a632\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.692794 4676 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-ovsdbserver-sb\") pod \"e1f791cc-7690-4f80-9007-a14fcca8a632\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.692844 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j5s9\" (UniqueName: \"kubernetes.io/projected/e1f791cc-7690-4f80-9007-a14fcca8a632-kube-api-access-9j5s9\") pod \"e1f791cc-7690-4f80-9007-a14fcca8a632\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.692904 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-config\") pod \"e1f791cc-7690-4f80-9007-a14fcca8a632\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.692966 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-dns-svc\") pod \"e1f791cc-7690-4f80-9007-a14fcca8a632\" (UID: \"e1f791cc-7690-4f80-9007-a14fcca8a632\") " Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.705339 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f791cc-7690-4f80-9007-a14fcca8a632-kube-api-access-9j5s9" (OuterVolumeSpecName: "kube-api-access-9j5s9") pod "e1f791cc-7690-4f80-9007-a14fcca8a632" (UID: "e1f791cc-7690-4f80-9007-a14fcca8a632"). InnerVolumeSpecName "kube-api-access-9j5s9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.723587 4676 scope.go:117] "RemoveContainer" containerID="cfab6422e88eeccb3646e027bd8862c4d0fb49c299978a170241c30c578c119a" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.761012 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1f791cc-7690-4f80-9007-a14fcca8a632" (UID: "e1f791cc-7690-4f80-9007-a14fcca8a632"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.761104 4676 scope.go:117] "RemoveContainer" containerID="e2b72fc7b5df059c0e36f49a0b10a80c4bc4c2798a40f7961418e5eff05aa92b" Jan 24 00:23:02 crc kubenswrapper[4676]: E0124 00:23:02.761846 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2b72fc7b5df059c0e36f49a0b10a80c4bc4c2798a40f7961418e5eff05aa92b\": container with ID starting with e2b72fc7b5df059c0e36f49a0b10a80c4bc4c2798a40f7961418e5eff05aa92b not found: ID does not exist" containerID="e2b72fc7b5df059c0e36f49a0b10a80c4bc4c2798a40f7961418e5eff05aa92b" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.761898 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2b72fc7b5df059c0e36f49a0b10a80c4bc4c2798a40f7961418e5eff05aa92b"} err="failed to get container status \"e2b72fc7b5df059c0e36f49a0b10a80c4bc4c2798a40f7961418e5eff05aa92b\": rpc error: code = NotFound desc = could not find container \"e2b72fc7b5df059c0e36f49a0b10a80c4bc4c2798a40f7961418e5eff05aa92b\": container with ID starting with e2b72fc7b5df059c0e36f49a0b10a80c4bc4c2798a40f7961418e5eff05aa92b not found: ID does not exist" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.761932 4676 scope.go:117] 
"RemoveContainer" containerID="cfab6422e88eeccb3646e027bd8862c4d0fb49c299978a170241c30c578c119a" Jan 24 00:23:02 crc kubenswrapper[4676]: E0124 00:23:02.762241 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfab6422e88eeccb3646e027bd8862c4d0fb49c299978a170241c30c578c119a\": container with ID starting with cfab6422e88eeccb3646e027bd8862c4d0fb49c299978a170241c30c578c119a not found: ID does not exist" containerID="cfab6422e88eeccb3646e027bd8862c4d0fb49c299978a170241c30c578c119a" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.762273 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfab6422e88eeccb3646e027bd8862c4d0fb49c299978a170241c30c578c119a"} err="failed to get container status \"cfab6422e88eeccb3646e027bd8862c4d0fb49c299978a170241c30c578c119a\": rpc error: code = NotFound desc = could not find container \"cfab6422e88eeccb3646e027bd8862c4d0fb49c299978a170241c30c578c119a\": container with ID starting with cfab6422e88eeccb3646e027bd8862c4d0fb49c299978a170241c30c578c119a not found: ID does not exist" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.768564 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-config" (OuterVolumeSpecName: "config") pod "e1f791cc-7690-4f80-9007-a14fcca8a632" (UID: "e1f791cc-7690-4f80-9007-a14fcca8a632"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.785747 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e1f791cc-7690-4f80-9007-a14fcca8a632" (UID: "e1f791cc-7690-4f80-9007-a14fcca8a632"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.791575 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1f791cc-7690-4f80-9007-a14fcca8a632" (UID: "e1f791cc-7690-4f80-9007-a14fcca8a632"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.791857 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1f791cc-7690-4f80-9007-a14fcca8a632" (UID: "e1f791cc-7690-4f80-9007-a14fcca8a632"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.795279 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.795300 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j5s9\" (UniqueName: \"kubernetes.io/projected/e1f791cc-7690-4f80-9007-a14fcca8a632-kube-api-access-9j5s9\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.795313 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.795325 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:02 crc 
kubenswrapper[4676]: I0124 00:23:02.795333 4676 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.795342 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1f791cc-7690-4f80-9007-a14fcca8a632-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.833866 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.858390 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-765f6cdf58-5q9v9" Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.959486 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xg9nt"] Jan 24 00:23:02 crc kubenswrapper[4676]: I0124 00:23:02.967501 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xg9nt"] Jan 24 00:23:03 crc kubenswrapper[4676]: I0124 00:23:03.631329 4676 generic.go:334] "Generic (PLEG): container finished" podID="fb66b31d-c165-4930-b633-79707cbf22bb" containerID="2401fa1f3b792936f62c0e8fef2c85d82ca1c01d1b0b3520974286bb7dbc9b75" exitCode=0 Jan 24 00:23:03 crc kubenswrapper[4676]: I0124 00:23:03.631410 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fb66b31d-c165-4930-b633-79707cbf22bb","Type":"ContainerDied","Data":"2401fa1f3b792936f62c0e8fef2c85d82ca1c01d1b0b3520974286bb7dbc9b75"} Jan 24 00:23:04 crc kubenswrapper[4676]: I0124 00:23:04.285194 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" 
path="/var/lib/kubelet/pods/6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4/volumes" Jan 24 00:23:04 crc kubenswrapper[4676]: I0124 00:23:04.286006 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f791cc-7690-4f80-9007-a14fcca8a632" path="/var/lib/kubelet/pods/e1f791cc-7690-4f80-9007-a14fcca8a632/volumes" Jan 24 00:23:04 crc kubenswrapper[4676]: I0124 00:23:04.667832 4676 generic.go:334] "Generic (PLEG): container finished" podID="fb66b31d-c165-4930-b633-79707cbf22bb" containerID="617a7bf60c86bd8fb95135f8ae3c47f3775cca4441a612fa0eecdd996ac43e81" exitCode=0 Jan 24 00:23:04 crc kubenswrapper[4676]: I0124 00:23:04.668143 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fb66b31d-c165-4930-b633-79707cbf22bb","Type":"ContainerDied","Data":"617a7bf60c86bd8fb95135f8ae3c47f3775cca4441a612fa0eecdd996ac43e81"} Jan 24 00:23:04 crc kubenswrapper[4676]: I0124 00:23:04.961538 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.036430 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-combined-ca-bundle\") pod \"fb66b31d-c165-4930-b633-79707cbf22bb\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.036836 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vknvh\" (UniqueName: \"kubernetes.io/projected/fb66b31d-c165-4930-b633-79707cbf22bb-kube-api-access-vknvh\") pod \"fb66b31d-c165-4930-b633-79707cbf22bb\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.036856 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb66b31d-c165-4930-b633-79707cbf22bb-etc-machine-id\") pod \"fb66b31d-c165-4930-b633-79707cbf22bb\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.036905 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-config-data-custom\") pod \"fb66b31d-c165-4930-b633-79707cbf22bb\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.036930 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-scripts\") pod \"fb66b31d-c165-4930-b633-79707cbf22bb\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.036952 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-config-data\") pod \"fb66b31d-c165-4930-b633-79707cbf22bb\" (UID: \"fb66b31d-c165-4930-b633-79707cbf22bb\") " Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.037494 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb66b31d-c165-4930-b633-79707cbf22bb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fb66b31d-c165-4930-b633-79707cbf22bb" (UID: "fb66b31d-c165-4930-b633-79707cbf22bb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.054698 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb66b31d-c165-4930-b633-79707cbf22bb-kube-api-access-vknvh" (OuterVolumeSpecName: "kube-api-access-vknvh") pod "fb66b31d-c165-4930-b633-79707cbf22bb" (UID: "fb66b31d-c165-4930-b633-79707cbf22bb"). InnerVolumeSpecName "kube-api-access-vknvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.054896 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fb66b31d-c165-4930-b633-79707cbf22bb" (UID: "fb66b31d-c165-4930-b633-79707cbf22bb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.061594 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-scripts" (OuterVolumeSpecName: "scripts") pod "fb66b31d-c165-4930-b633-79707cbf22bb" (UID: "fb66b31d-c165-4930-b633-79707cbf22bb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.103510 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb66b31d-c165-4930-b633-79707cbf22bb" (UID: "fb66b31d-c165-4930-b633-79707cbf22bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.139156 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.139357 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.139687 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vknvh\" (UniqueName: \"kubernetes.io/projected/fb66b31d-c165-4930-b633-79707cbf22bb-kube-api-access-vknvh\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.140074 4676 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb66b31d-c165-4930-b633-79707cbf22bb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.140153 4676 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.159529 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-config-data" (OuterVolumeSpecName: "config-data") pod "fb66b31d-c165-4930-b633-79707cbf22bb" (UID: "fb66b31d-c165-4930-b633-79707cbf22bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.241972 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb66b31d-c165-4930-b633-79707cbf22bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.676819 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fb66b31d-c165-4930-b633-79707cbf22bb","Type":"ContainerDied","Data":"f56885c2863c57838bc58c1bd3891e3e64ca0124e390aeca0b9564b1dc1517dd"} Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.676870 4676 scope.go:117] "RemoveContainer" containerID="2401fa1f3b792936f62c0e8fef2c85d82ca1c01d1b0b3520974286bb7dbc9b75" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.676980 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.702368 4676 scope.go:117] "RemoveContainer" containerID="617a7bf60c86bd8fb95135f8ae3c47f3775cca4441a612fa0eecdd996ac43e81" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.721627 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.737065 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.751355 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 00:23:05 crc kubenswrapper[4676]: E0124 00:23:05.752764 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f791cc-7690-4f80-9007-a14fcca8a632" containerName="init" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.752798 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f791cc-7690-4f80-9007-a14fcca8a632" containerName="init" Jan 24 00:23:05 crc kubenswrapper[4676]: E0124 00:23:05.752814 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" containerName="barbican-api-log" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.752822 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" containerName="barbican-api-log" Jan 24 00:23:05 crc kubenswrapper[4676]: E0124 00:23:05.752830 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb66b31d-c165-4930-b633-79707cbf22bb" containerName="probe" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.752836 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb66b31d-c165-4930-b633-79707cbf22bb" containerName="probe" Jan 24 00:23:05 crc kubenswrapper[4676]: E0124 00:23:05.752851 4676 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e1f791cc-7690-4f80-9007-a14fcca8a632" containerName="dnsmasq-dns" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.752857 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f791cc-7690-4f80-9007-a14fcca8a632" containerName="dnsmasq-dns" Jan 24 00:23:05 crc kubenswrapper[4676]: E0124 00:23:05.752867 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" containerName="barbican-api" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.752872 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" containerName="barbican-api" Jan 24 00:23:05 crc kubenswrapper[4676]: E0124 00:23:05.752896 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb66b31d-c165-4930-b633-79707cbf22bb" containerName="cinder-scheduler" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.752902 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb66b31d-c165-4930-b633-79707cbf22bb" containerName="cinder-scheduler" Jan 24 00:23:05 crc kubenswrapper[4676]: E0124 00:23:05.752911 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3ec968-d892-487a-930a-44c79123d54b" containerName="neutron-httpd" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.752916 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3ec968-d892-487a-930a-44c79123d54b" containerName="neutron-httpd" Jan 24 00:23:05 crc kubenswrapper[4676]: E0124 00:23:05.752930 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3ec968-d892-487a-930a-44c79123d54b" containerName="neutron-api" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.752936 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3ec968-d892-487a-930a-44c79123d54b" containerName="neutron-api" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.753084 4676 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2e3ec968-d892-487a-930a-44c79123d54b" containerName="neutron-api" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.753097 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" containerName="barbican-api-log" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.753108 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3ec968-d892-487a-930a-44c79123d54b" containerName="neutron-httpd" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.753117 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb66b31d-c165-4930-b633-79707cbf22bb" containerName="cinder-scheduler" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.753133 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb66b31d-c165-4930-b633-79707cbf22bb" containerName="probe" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.753142 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f791cc-7690-4f80-9007-a14fcca8a632" containerName="dnsmasq-dns" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.753152 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c41d5e8-6a89-452a-bacc-2d7d25cfd6d4" containerName="barbican-api" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.754219 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.759596 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.770597 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.853440 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-264rh\" (UniqueName: \"kubernetes.io/projected/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-kube-api-access-264rh\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.853796 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-scripts\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.853822 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.853861 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-config-data\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.853890 4676 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.854142 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.956117 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-config-data\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.956426 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.956581 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.956770 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-264rh\" (UniqueName: 
\"kubernetes.io/projected/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-kube-api-access-264rh\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.956864 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-scripts\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.956955 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.956614 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.962356 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.963468 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " 
pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.964280 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-config-data\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.966914 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-scripts\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:05 crc kubenswrapper[4676]: I0124 00:23:05.993107 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-264rh\" (UniqueName: \"kubernetes.io/projected/629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978-kube-api-access-264rh\") pod \"cinder-scheduler-0\" (UID: \"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978\") " pod="openstack/cinder-scheduler-0" Jan 24 00:23:06 crc kubenswrapper[4676]: I0124 00:23:06.072305 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 00:23:06 crc kubenswrapper[4676]: I0124 00:23:06.270008 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb66b31d-c165-4930-b633-79707cbf22bb" path="/var/lib/kubelet/pods/fb66b31d-c165-4930-b633-79707cbf22bb/volumes" Jan 24 00:23:06 crc kubenswrapper[4676]: I0124 00:23:06.514024 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 00:23:06 crc kubenswrapper[4676]: W0124 00:23:06.519570 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod629a8ba3_3e4e_4fc2_86a4_bb07d2fb4978.slice/crio-5edb5f7d4665b4762958b02febb9c178c176623212b544a1cdff5460e3f9da71 WatchSource:0}: Error finding container 5edb5f7d4665b4762958b02febb9c178c176623212b544a1cdff5460e3f9da71: Status 404 returned error can't find the container with id 5edb5f7d4665b4762958b02febb9c178c176623212b544a1cdff5460e3f9da71 Jan 24 00:23:06 crc kubenswrapper[4676]: I0124 00:23:06.701671 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978","Type":"ContainerStarted","Data":"5edb5f7d4665b4762958b02febb9c178c176623212b544a1cdff5460e3f9da71"} Jan 24 00:23:07 crc kubenswrapper[4676]: I0124 00:23:07.490558 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-848cf88cfc-xg9nt" podUID="e1f791cc-7690-4f80-9007-a14fcca8a632" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: i/o timeout" Jan 24 00:23:07 crc kubenswrapper[4676]: I0124 00:23:07.522835 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-849b597d57-kw79c" Jan 24 00:23:07 crc kubenswrapper[4676]: I0124 00:23:07.725707 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978","Type":"ContainerStarted","Data":"e3340c74f6f22649c021c7e0f187ac176c4b3ae05fe7d091105168fc7ec6f010"} Jan 24 00:23:08 crc kubenswrapper[4676]: I0124 00:23:08.735676 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978","Type":"ContainerStarted","Data":"34dba549c2590def8f9c0cdb1da4e8f0344bb110720c1f6efbec0f56a8694f4b"} Jan 24 00:23:08 crc kubenswrapper[4676]: I0124 00:23:08.772638 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.772620363 podStartE2EDuration="3.772620363s" podCreationTimestamp="2026-01-24 00:23:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:23:08.762882187 +0000 UTC m=+1172.792853188" watchObservedRunningTime="2026-01-24 00:23:08.772620363 +0000 UTC m=+1172.802591354" Jan 24 00:23:09 crc kubenswrapper[4676]: I0124 00:23:09.364225 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:23:09 crc kubenswrapper[4676]: I0124 00:23:09.364279 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:23:09 crc kubenswrapper[4676]: I0124 00:23:09.893486 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 24 00:23:09 crc kubenswrapper[4676]: I0124 00:23:09.895215 4676 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 24 00:23:09 crc kubenswrapper[4676]: I0124 00:23:09.898229 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 24 00:23:09 crc kubenswrapper[4676]: I0124 00:23:09.898397 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-swvxc" Jan 24 00:23:09 crc kubenswrapper[4676]: I0124 00:23:09.906578 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 24 00:23:09 crc kubenswrapper[4676]: I0124 00:23:09.909834 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 24 00:23:09 crc kubenswrapper[4676]: I0124 00:23:09.938155 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f3fa6e59-785d-4d40-8d73-170552068e43-openstack-config-secret\") pod \"openstackclient\" (UID: \"f3fa6e59-785d-4d40-8d73-170552068e43\") " pod="openstack/openstackclient" Jan 24 00:23:09 crc kubenswrapper[4676]: I0124 00:23:09.938413 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw4vg\" (UniqueName: \"kubernetes.io/projected/f3fa6e59-785d-4d40-8d73-170552068e43-kube-api-access-nw4vg\") pod \"openstackclient\" (UID: \"f3fa6e59-785d-4d40-8d73-170552068e43\") " pod="openstack/openstackclient" Jan 24 00:23:09 crc kubenswrapper[4676]: I0124 00:23:09.938545 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fa6e59-785d-4d40-8d73-170552068e43-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f3fa6e59-785d-4d40-8d73-170552068e43\") " pod="openstack/openstackclient" Jan 24 00:23:09 crc kubenswrapper[4676]: I0124 
00:23:09.938664 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f3fa6e59-785d-4d40-8d73-170552068e43-openstack-config\") pod \"openstackclient\" (UID: \"f3fa6e59-785d-4d40-8d73-170552068e43\") " pod="openstack/openstackclient" Jan 24 00:23:10 crc kubenswrapper[4676]: I0124 00:23:10.040268 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fa6e59-785d-4d40-8d73-170552068e43-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f3fa6e59-785d-4d40-8d73-170552068e43\") " pod="openstack/openstackclient" Jan 24 00:23:10 crc kubenswrapper[4676]: I0124 00:23:10.040358 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f3fa6e59-785d-4d40-8d73-170552068e43-openstack-config\") pod \"openstackclient\" (UID: \"f3fa6e59-785d-4d40-8d73-170552068e43\") " pod="openstack/openstackclient" Jan 24 00:23:10 crc kubenswrapper[4676]: I0124 00:23:10.040559 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f3fa6e59-785d-4d40-8d73-170552068e43-openstack-config-secret\") pod \"openstackclient\" (UID: \"f3fa6e59-785d-4d40-8d73-170552068e43\") " pod="openstack/openstackclient" Jan 24 00:23:10 crc kubenswrapper[4676]: I0124 00:23:10.040603 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw4vg\" (UniqueName: \"kubernetes.io/projected/f3fa6e59-785d-4d40-8d73-170552068e43-kube-api-access-nw4vg\") pod \"openstackclient\" (UID: \"f3fa6e59-785d-4d40-8d73-170552068e43\") " pod="openstack/openstackclient" Jan 24 00:23:10 crc kubenswrapper[4676]: I0124 00:23:10.041368 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/f3fa6e59-785d-4d40-8d73-170552068e43-openstack-config\") pod \"openstackclient\" (UID: \"f3fa6e59-785d-4d40-8d73-170552068e43\") " pod="openstack/openstackclient" Jan 24 00:23:10 crc kubenswrapper[4676]: I0124 00:23:10.046669 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fa6e59-785d-4d40-8d73-170552068e43-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f3fa6e59-785d-4d40-8d73-170552068e43\") " pod="openstack/openstackclient" Jan 24 00:23:10 crc kubenswrapper[4676]: I0124 00:23:10.060209 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f3fa6e59-785d-4d40-8d73-170552068e43-openstack-config-secret\") pod \"openstackclient\" (UID: \"f3fa6e59-785d-4d40-8d73-170552068e43\") " pod="openstack/openstackclient" Jan 24 00:23:10 crc kubenswrapper[4676]: I0124 00:23:10.060840 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw4vg\" (UniqueName: \"kubernetes.io/projected/f3fa6e59-785d-4d40-8d73-170552068e43-kube-api-access-nw4vg\") pod \"openstackclient\" (UID: \"f3fa6e59-785d-4d40-8d73-170552068e43\") " pod="openstack/openstackclient" Jan 24 00:23:10 crc kubenswrapper[4676]: I0124 00:23:10.224850 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 24 00:23:10 crc kubenswrapper[4676]: I0124 00:23:10.740841 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 24 00:23:10 crc kubenswrapper[4676]: I0124 00:23:10.768262 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f3fa6e59-785d-4d40-8d73-170552068e43","Type":"ContainerStarted","Data":"3c334f3f512b309d59d7969a994238af066cd6ca2cd2b166ae3f52a6517d0c1f"} Jan 24 00:23:11 crc kubenswrapper[4676]: I0124 00:23:11.072628 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 24 00:23:11 crc kubenswrapper[4676]: I0124 00:23:11.222911 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.075076 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-f47776b4c-v4xb2"] Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.076698 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.077997 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.079938 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.080103 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.090384 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f47776b4c-v4xb2"] Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.117577 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b976b9e2-b80e-4626-919d-3bb84f0151e8-config-data\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.117632 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b976b9e2-b80e-4626-919d-3bb84f0151e8-etc-swift\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.117685 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b976b9e2-b80e-4626-919d-3bb84f0151e8-run-httpd\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.117729 4676 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b976b9e2-b80e-4626-919d-3bb84f0151e8-public-tls-certs\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.117769 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b976b9e2-b80e-4626-919d-3bb84f0151e8-log-httpd\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.117790 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b976b9e2-b80e-4626-919d-3bb84f0151e8-combined-ca-bundle\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.117835 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b976b9e2-b80e-4626-919d-3bb84f0151e8-internal-tls-certs\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.117855 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5j5t\" (UniqueName: \"kubernetes.io/projected/b976b9e2-b80e-4626-919d-3bb84f0151e8-kube-api-access-f5j5t\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc 
kubenswrapper[4676]: I0124 00:23:14.219432 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b976b9e2-b80e-4626-919d-3bb84f0151e8-log-httpd\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.219739 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b976b9e2-b80e-4626-919d-3bb84f0151e8-combined-ca-bundle\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.219802 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b976b9e2-b80e-4626-919d-3bb84f0151e8-internal-tls-certs\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.219827 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5j5t\" (UniqueName: \"kubernetes.io/projected/b976b9e2-b80e-4626-919d-3bb84f0151e8-kube-api-access-f5j5t\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.219846 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b976b9e2-b80e-4626-919d-3bb84f0151e8-config-data\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.219873 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b976b9e2-b80e-4626-919d-3bb84f0151e8-etc-swift\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.219919 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b976b9e2-b80e-4626-919d-3bb84f0151e8-run-httpd\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.219961 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b976b9e2-b80e-4626-919d-3bb84f0151e8-public-tls-certs\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.220626 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b976b9e2-b80e-4626-919d-3bb84f0151e8-log-httpd\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.220945 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b976b9e2-b80e-4626-919d-3bb84f0151e8-run-httpd\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.225910 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b976b9e2-b80e-4626-919d-3bb84f0151e8-config-data\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.232942 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b976b9e2-b80e-4626-919d-3bb84f0151e8-internal-tls-certs\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.251997 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b976b9e2-b80e-4626-919d-3bb84f0151e8-public-tls-certs\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.252126 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5j5t\" (UniqueName: \"kubernetes.io/projected/b976b9e2-b80e-4626-919d-3bb84f0151e8-kube-api-access-f5j5t\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.252892 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b976b9e2-b80e-4626-919d-3bb84f0151e8-etc-swift\") pod \"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.305904 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b976b9e2-b80e-4626-919d-3bb84f0151e8-combined-ca-bundle\") pod 
\"swift-proxy-f47776b4c-v4xb2\" (UID: \"b976b9e2-b80e-4626-919d-3bb84f0151e8\") " pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.398804 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:14 crc kubenswrapper[4676]: I0124 00:23:14.985519 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f47776b4c-v4xb2"] Jan 24 00:23:14 crc kubenswrapper[4676]: W0124 00:23:14.996475 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb976b9e2_b80e_4626_919d_3bb84f0151e8.slice/crio-f33df89d7230ab46e614c7fd4d27c529da899033af6df22183f7274a665d8bfc WatchSource:0}: Error finding container f33df89d7230ab46e614c7fd4d27c529da899033af6df22183f7274a665d8bfc: Status 404 returned error can't find the container with id f33df89d7230ab46e614c7fd4d27c529da899033af6df22183f7274a665d8bfc Jan 24 00:23:15 crc kubenswrapper[4676]: I0124 00:23:15.820341 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f47776b4c-v4xb2" event={"ID":"b976b9e2-b80e-4626-919d-3bb84f0151e8","Type":"ContainerStarted","Data":"998a1d998a7ec6be502f68d6f41bc73bc392c7fc5d51b74f3156c1e9d3e06f50"} Jan 24 00:23:15 crc kubenswrapper[4676]: I0124 00:23:15.820916 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f47776b4c-v4xb2" event={"ID":"b976b9e2-b80e-4626-919d-3bb84f0151e8","Type":"ContainerStarted","Data":"00d5f9d23144a1151738a7c3d52da137802c935773f92f51e8a47f56cb4e8894"} Jan 24 00:23:15 crc kubenswrapper[4676]: I0124 00:23:15.820927 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f47776b4c-v4xb2" event={"ID":"b976b9e2-b80e-4626-919d-3bb84f0151e8","Type":"ContainerStarted","Data":"f33df89d7230ab46e614c7fd4d27c529da899033af6df22183f7274a665d8bfc"} Jan 24 00:23:16 crc kubenswrapper[4676]: 
I0124 00:23:16.333981 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 24 00:23:16 crc kubenswrapper[4676]: I0124 00:23:16.826888 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:16 crc kubenswrapper[4676]: I0124 00:23:16.826968 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:16 crc kubenswrapper[4676]: I0124 00:23:16.843798 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-f47776b4c-v4xb2" podStartSLOduration=2.8437788680000002 podStartE2EDuration="2.843778868s" podCreationTimestamp="2026-01-24 00:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:23:16.84186989 +0000 UTC m=+1180.871840891" watchObservedRunningTime="2026-01-24 00:23:16.843778868 +0000 UTC m=+1180.873749869" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.002854 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-bpr95"] Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.005009 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-bpr95" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.018052 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bpr95"] Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.147040 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wl59\" (UniqueName: \"kubernetes.io/projected/ee552380-8d96-4a10-a5b8-2deb2e73b15f-kube-api-access-2wl59\") pod \"nova-api-db-create-bpr95\" (UID: \"ee552380-8d96-4a10-a5b8-2deb2e73b15f\") " pod="openstack/nova-api-db-create-bpr95" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.147133 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee552380-8d96-4a10-a5b8-2deb2e73b15f-operator-scripts\") pod \"nova-api-db-create-bpr95\" (UID: \"ee552380-8d96-4a10-a5b8-2deb2e73b15f\") " pod="openstack/nova-api-db-create-bpr95" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.166811 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-vtxzm"] Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.167889 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-vtxzm" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.190348 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vtxzm"] Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.248393 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee552380-8d96-4a10-a5b8-2deb2e73b15f-operator-scripts\") pod \"nova-api-db-create-bpr95\" (UID: \"ee552380-8d96-4a10-a5b8-2deb2e73b15f\") " pod="openstack/nova-api-db-create-bpr95" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.248507 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wl59\" (UniqueName: \"kubernetes.io/projected/ee552380-8d96-4a10-a5b8-2deb2e73b15f-kube-api-access-2wl59\") pod \"nova-api-db-create-bpr95\" (UID: \"ee552380-8d96-4a10-a5b8-2deb2e73b15f\") " pod="openstack/nova-api-db-create-bpr95" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.249467 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee552380-8d96-4a10-a5b8-2deb2e73b15f-operator-scripts\") pod \"nova-api-db-create-bpr95\" (UID: \"ee552380-8d96-4a10-a5b8-2deb2e73b15f\") " pod="openstack/nova-api-db-create-bpr95" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.293602 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a963-account-create-update-jx4g6"] Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.294687 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a963-account-create-update-jx4g6" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.297714 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a963-account-create-update-jx4g6"] Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.300709 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.304202 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wl59\" (UniqueName: \"kubernetes.io/projected/ee552380-8d96-4a10-a5b8-2deb2e73b15f-kube-api-access-2wl59\") pod \"nova-api-db-create-bpr95\" (UID: \"ee552380-8d96-4a10-a5b8-2deb2e73b15f\") " pod="openstack/nova-api-db-create-bpr95" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.328020 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bpr95" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.334419 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-b7wzs"] Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.335576 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-b7wzs" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.356025 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-727rf\" (UniqueName: \"kubernetes.io/projected/792cc535-5ed4-4e06-a19c-31ba34c7dfc7-kube-api-access-727rf\") pod \"nova-cell0-db-create-vtxzm\" (UID: \"792cc535-5ed4-4e06-a19c-31ba34c7dfc7\") " pod="openstack/nova-cell0-db-create-vtxzm" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.356160 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/792cc535-5ed4-4e06-a19c-31ba34c7dfc7-operator-scripts\") pod \"nova-cell0-db-create-vtxzm\" (UID: \"792cc535-5ed4-4e06-a19c-31ba34c7dfc7\") " pod="openstack/nova-cell0-db-create-vtxzm" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.374747 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-b7wzs"] Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.459050 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/792cc535-5ed4-4e06-a19c-31ba34c7dfc7-operator-scripts\") pod \"nova-cell0-db-create-vtxzm\" (UID: \"792cc535-5ed4-4e06-a19c-31ba34c7dfc7\") " pod="openstack/nova-cell0-db-create-vtxzm" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.471782 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nljvc\" (UniqueName: \"kubernetes.io/projected/7d47faf0-8eaa-474f-8cde-aed6c10f4a05-kube-api-access-nljvc\") pod \"nova-api-a963-account-create-update-jx4g6\" (UID: \"7d47faf0-8eaa-474f-8cde-aed6c10f4a05\") " pod="openstack/nova-api-a963-account-create-update-jx4g6" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.473194 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cd4cfd3-72ab-45d6-8683-85acb6cadf66-operator-scripts\") pod \"nova-cell1-db-create-b7wzs\" (UID: \"9cd4cfd3-72ab-45d6-8683-85acb6cadf66\") " pod="openstack/nova-cell1-db-create-b7wzs" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.473337 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfkb7\" (UniqueName: \"kubernetes.io/projected/9cd4cfd3-72ab-45d6-8683-85acb6cadf66-kube-api-access-cfkb7\") pod \"nova-cell1-db-create-b7wzs\" (UID: \"9cd4cfd3-72ab-45d6-8683-85acb6cadf66\") " pod="openstack/nova-cell1-db-create-b7wzs" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.473451 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d47faf0-8eaa-474f-8cde-aed6c10f4a05-operator-scripts\") pod \"nova-api-a963-account-create-update-jx4g6\" (UID: \"7d47faf0-8eaa-474f-8cde-aed6c10f4a05\") " pod="openstack/nova-api-a963-account-create-update-jx4g6" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.473590 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-727rf\" (UniqueName: \"kubernetes.io/projected/792cc535-5ed4-4e06-a19c-31ba34c7dfc7-kube-api-access-727rf\") pod \"nova-cell0-db-create-vtxzm\" (UID: \"792cc535-5ed4-4e06-a19c-31ba34c7dfc7\") " pod="openstack/nova-cell0-db-create-vtxzm" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.463786 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/792cc535-5ed4-4e06-a19c-31ba34c7dfc7-operator-scripts\") pod \"nova-cell0-db-create-vtxzm\" (UID: \"792cc535-5ed4-4e06-a19c-31ba34c7dfc7\") " pod="openstack/nova-cell0-db-create-vtxzm" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 
00:23:20.519023 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-727rf\" (UniqueName: \"kubernetes.io/projected/792cc535-5ed4-4e06-a19c-31ba34c7dfc7-kube-api-access-727rf\") pod \"nova-cell0-db-create-vtxzm\" (UID: \"792cc535-5ed4-4e06-a19c-31ba34c7dfc7\") " pod="openstack/nova-cell0-db-create-vtxzm" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.578209 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d47faf0-8eaa-474f-8cde-aed6c10f4a05-operator-scripts\") pod \"nova-api-a963-account-create-update-jx4g6\" (UID: \"7d47faf0-8eaa-474f-8cde-aed6c10f4a05\") " pod="openstack/nova-api-a963-account-create-update-jx4g6" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.578345 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nljvc\" (UniqueName: \"kubernetes.io/projected/7d47faf0-8eaa-474f-8cde-aed6c10f4a05-kube-api-access-nljvc\") pod \"nova-api-a963-account-create-update-jx4g6\" (UID: \"7d47faf0-8eaa-474f-8cde-aed6c10f4a05\") " pod="openstack/nova-api-a963-account-create-update-jx4g6" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.578396 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cd4cfd3-72ab-45d6-8683-85acb6cadf66-operator-scripts\") pod \"nova-cell1-db-create-b7wzs\" (UID: \"9cd4cfd3-72ab-45d6-8683-85acb6cadf66\") " pod="openstack/nova-cell1-db-create-b7wzs" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.578432 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfkb7\" (UniqueName: \"kubernetes.io/projected/9cd4cfd3-72ab-45d6-8683-85acb6cadf66-kube-api-access-cfkb7\") pod \"nova-cell1-db-create-b7wzs\" (UID: \"9cd4cfd3-72ab-45d6-8683-85acb6cadf66\") " pod="openstack/nova-cell1-db-create-b7wzs" Jan 24 00:23:20 crc 
kubenswrapper[4676]: I0124 00:23:20.579243 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d47faf0-8eaa-474f-8cde-aed6c10f4a05-operator-scripts\") pod \"nova-api-a963-account-create-update-jx4g6\" (UID: \"7d47faf0-8eaa-474f-8cde-aed6c10f4a05\") " pod="openstack/nova-api-a963-account-create-update-jx4g6" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.579849 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cd4cfd3-72ab-45d6-8683-85acb6cadf66-operator-scripts\") pod \"nova-cell1-db-create-b7wzs\" (UID: \"9cd4cfd3-72ab-45d6-8683-85acb6cadf66\") " pod="openstack/nova-cell1-db-create-b7wzs" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.624126 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-95bf-account-create-update-7cvdb"] Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.625135 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-95bf-account-create-update-7cvdb" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.625889 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nljvc\" (UniqueName: \"kubernetes.io/projected/7d47faf0-8eaa-474f-8cde-aed6c10f4a05-kube-api-access-nljvc\") pod \"nova-api-a963-account-create-update-jx4g6\" (UID: \"7d47faf0-8eaa-474f-8cde-aed6c10f4a05\") " pod="openstack/nova-api-a963-account-create-update-jx4g6" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.631081 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.638100 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfkb7\" (UniqueName: \"kubernetes.io/projected/9cd4cfd3-72ab-45d6-8683-85acb6cadf66-kube-api-access-cfkb7\") pod \"nova-cell1-db-create-b7wzs\" (UID: \"9cd4cfd3-72ab-45d6-8683-85acb6cadf66\") " pod="openstack/nova-cell1-db-create-b7wzs" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.662962 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-95bf-account-create-update-7cvdb"] Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.678056 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a963-account-create-update-jx4g6" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.679526 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a5f8ec9-ee62-4078-846c-291a47631ffb-operator-scripts\") pod \"nova-cell0-95bf-account-create-update-7cvdb\" (UID: \"2a5f8ec9-ee62-4078-846c-291a47631ffb\") " pod="openstack/nova-cell0-95bf-account-create-update-7cvdb" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.679680 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmmfv\" (UniqueName: \"kubernetes.io/projected/2a5f8ec9-ee62-4078-846c-291a47631ffb-kube-api-access-qmmfv\") pod \"nova-cell0-95bf-account-create-update-7cvdb\" (UID: \"2a5f8ec9-ee62-4078-846c-291a47631ffb\") " pod="openstack/nova-cell0-95bf-account-create-update-7cvdb" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.697563 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-b7wzs" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.723903 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5f9c-account-create-update-9tmf6"] Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.727826 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5f9c-account-create-update-9tmf6" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.732935 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.747015 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5f9c-account-create-update-9tmf6"] Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.781074 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2ngb\" (UniqueName: \"kubernetes.io/projected/bd4662bb-5179-4f57-8571-5198dcf69bdb-kube-api-access-g2ngb\") pod \"nova-cell1-5f9c-account-create-update-9tmf6\" (UID: \"bd4662bb-5179-4f57-8571-5198dcf69bdb\") " pod="openstack/nova-cell1-5f9c-account-create-update-9tmf6" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.781199 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmmfv\" (UniqueName: \"kubernetes.io/projected/2a5f8ec9-ee62-4078-846c-291a47631ffb-kube-api-access-qmmfv\") pod \"nova-cell0-95bf-account-create-update-7cvdb\" (UID: \"2a5f8ec9-ee62-4078-846c-291a47631ffb\") " pod="openstack/nova-cell0-95bf-account-create-update-7cvdb" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.781244 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd4662bb-5179-4f57-8571-5198dcf69bdb-operator-scripts\") pod \"nova-cell1-5f9c-account-create-update-9tmf6\" (UID: \"bd4662bb-5179-4f57-8571-5198dcf69bdb\") " pod="openstack/nova-cell1-5f9c-account-create-update-9tmf6" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.781266 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2a5f8ec9-ee62-4078-846c-291a47631ffb-operator-scripts\") pod \"nova-cell0-95bf-account-create-update-7cvdb\" (UID: \"2a5f8ec9-ee62-4078-846c-291a47631ffb\") " pod="openstack/nova-cell0-95bf-account-create-update-7cvdb" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.781887 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a5f8ec9-ee62-4078-846c-291a47631ffb-operator-scripts\") pod \"nova-cell0-95bf-account-create-update-7cvdb\" (UID: \"2a5f8ec9-ee62-4078-846c-291a47631ffb\") " pod="openstack/nova-cell0-95bf-account-create-update-7cvdb" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.794780 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vtxzm" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.812034 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmmfv\" (UniqueName: \"kubernetes.io/projected/2a5f8ec9-ee62-4078-846c-291a47631ffb-kube-api-access-qmmfv\") pod \"nova-cell0-95bf-account-create-update-7cvdb\" (UID: \"2a5f8ec9-ee62-4078-846c-291a47631ffb\") " pod="openstack/nova-cell0-95bf-account-create-update-7cvdb" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.882538 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd4662bb-5179-4f57-8571-5198dcf69bdb-operator-scripts\") pod \"nova-cell1-5f9c-account-create-update-9tmf6\" (UID: \"bd4662bb-5179-4f57-8571-5198dcf69bdb\") " pod="openstack/nova-cell1-5f9c-account-create-update-9tmf6" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.882831 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2ngb\" (UniqueName: \"kubernetes.io/projected/bd4662bb-5179-4f57-8571-5198dcf69bdb-kube-api-access-g2ngb\") pod 
\"nova-cell1-5f9c-account-create-update-9tmf6\" (UID: \"bd4662bb-5179-4f57-8571-5198dcf69bdb\") " pod="openstack/nova-cell1-5f9c-account-create-update-9tmf6" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.883246 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd4662bb-5179-4f57-8571-5198dcf69bdb-operator-scripts\") pod \"nova-cell1-5f9c-account-create-update-9tmf6\" (UID: \"bd4662bb-5179-4f57-8571-5198dcf69bdb\") " pod="openstack/nova-cell1-5f9c-account-create-update-9tmf6" Jan 24 00:23:20 crc kubenswrapper[4676]: I0124 00:23:20.906930 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2ngb\" (UniqueName: \"kubernetes.io/projected/bd4662bb-5179-4f57-8571-5198dcf69bdb-kube-api-access-g2ngb\") pod \"nova-cell1-5f9c-account-create-update-9tmf6\" (UID: \"bd4662bb-5179-4f57-8571-5198dcf69bdb\") " pod="openstack/nova-cell1-5f9c-account-create-update-9tmf6" Jan 24 00:23:21 crc kubenswrapper[4676]: I0124 00:23:21.041087 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-95bf-account-create-update-7cvdb" Jan 24 00:23:21 crc kubenswrapper[4676]: I0124 00:23:21.048500 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5f9c-account-create-update-9tmf6" Jan 24 00:23:21 crc kubenswrapper[4676]: I0124 00:23:21.906893 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 00:23:21 crc kubenswrapper[4676]: I0124 00:23:21.907140 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="643e6d41-6572-4f21-8651-7f577967bfe8" containerName="glance-log" containerID="cri-o://b6416b75b9719da0d296633782ac2fc04e1dd20b2e9d142f5025be8c5a5d754d" gracePeriod=30 Jan 24 00:23:21 crc kubenswrapper[4676]: I0124 00:23:21.907274 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="643e6d41-6572-4f21-8651-7f577967bfe8" containerName="glance-httpd" containerID="cri-o://e3769b8edac359e6c5b4b6e492f9b2bb7ee70731b0b4c2778ff42c7ddf474ff8" gracePeriod=30 Jan 24 00:23:22 crc kubenswrapper[4676]: I0124 00:23:22.108224 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 24 00:23:22 crc kubenswrapper[4676]: I0124 00:23:22.854521 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 00:23:22 crc kubenswrapper[4676]: I0124 00:23:22.854805 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="77ec0563-46f1-45b0-892b-352d088f9517" containerName="glance-httpd" containerID="cri-o://1989fb85f0013ac25e4d8fb0c72cdef0a1a08075037f574a25a6701244b8dd50" gracePeriod=30 Jan 24 00:23:22 crc kubenswrapper[4676]: I0124 00:23:22.854752 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="77ec0563-46f1-45b0-892b-352d088f9517" containerName="glance-log" containerID="cri-o://7c7d07ff8323e9ccead03e4d66a066ad64d46892c577bd47375ca06cbd3b2418" gracePeriod=30 Jan 24 00:23:22 crc kubenswrapper[4676]: I0124 00:23:22.878026 4676 generic.go:334] "Generic (PLEG): container finished" podID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerID="1b26cefde45473bb50d47ba50a2ec78ece32f247247ca63eb469045a70b4a673" exitCode=137 Jan 24 00:23:22 crc kubenswrapper[4676]: I0124 00:23:22.878079 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54d56910-d4b7-45b1-8699-5af7eaa29b96","Type":"ContainerDied","Data":"1b26cefde45473bb50d47ba50a2ec78ece32f247247ca63eb469045a70b4a673"} Jan 24 00:23:22 crc kubenswrapper[4676]: I0124 00:23:22.881742 4676 generic.go:334] "Generic (PLEG): container finished" podID="643e6d41-6572-4f21-8651-7f577967bfe8" containerID="b6416b75b9719da0d296633782ac2fc04e1dd20b2e9d142f5025be8c5a5d754d" exitCode=143 Jan 24 00:23:22 crc kubenswrapper[4676]: I0124 00:23:22.881768 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"643e6d41-6572-4f21-8651-7f577967bfe8","Type":"ContainerDied","Data":"b6416b75b9719da0d296633782ac2fc04e1dd20b2e9d142f5025be8c5a5d754d"} Jan 24 00:23:23 crc kubenswrapper[4676]: I0124 00:23:23.896729 4676 generic.go:334] "Generic (PLEG): container finished" podID="77ec0563-46f1-45b0-892b-352d088f9517" containerID="7c7d07ff8323e9ccead03e4d66a066ad64d46892c577bd47375ca06cbd3b2418" exitCode=143 Jan 24 00:23:23 crc kubenswrapper[4676]: I0124 00:23:23.896771 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"77ec0563-46f1-45b0-892b-352d088f9517","Type":"ContainerDied","Data":"7c7d07ff8323e9ccead03e4d66a066ad64d46892c577bd47375ca06cbd3b2418"} Jan 24 00:23:24 crc kubenswrapper[4676]: I0124 00:23:24.415562 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:24 crc kubenswrapper[4676]: I0124 00:23:24.423840 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f47776b4c-v4xb2" Jan 24 00:23:24 crc kubenswrapper[4676]: I0124 00:23:24.717971 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-95bf-account-create-update-7cvdb"] Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.207733 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.387291 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-combined-ca-bundle\") pod \"54d56910-d4b7-45b1-8699-5af7eaa29b96\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.387418 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tljs7\" (UniqueName: \"kubernetes.io/projected/54d56910-d4b7-45b1-8699-5af7eaa29b96-kube-api-access-tljs7\") pod \"54d56910-d4b7-45b1-8699-5af7eaa29b96\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.387457 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-config-data\") pod \"54d56910-d4b7-45b1-8699-5af7eaa29b96\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.387525 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-sg-core-conf-yaml\") pod \"54d56910-d4b7-45b1-8699-5af7eaa29b96\" (UID: 
\"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.387585 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-scripts\") pod \"54d56910-d4b7-45b1-8699-5af7eaa29b96\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.387614 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54d56910-d4b7-45b1-8699-5af7eaa29b96-log-httpd\") pod \"54d56910-d4b7-45b1-8699-5af7eaa29b96\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.387662 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54d56910-d4b7-45b1-8699-5af7eaa29b96-run-httpd\") pod \"54d56910-d4b7-45b1-8699-5af7eaa29b96\" (UID: \"54d56910-d4b7-45b1-8699-5af7eaa29b96\") " Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.394368 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54d56910-d4b7-45b1-8699-5af7eaa29b96-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "54d56910-d4b7-45b1-8699-5af7eaa29b96" (UID: "54d56910-d4b7-45b1-8699-5af7eaa29b96"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.395775 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54d56910-d4b7-45b1-8699-5af7eaa29b96-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "54d56910-d4b7-45b1-8699-5af7eaa29b96" (UID: "54d56910-d4b7-45b1-8699-5af7eaa29b96"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.396771 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54d56910-d4b7-45b1-8699-5af7eaa29b96-kube-api-access-tljs7" (OuterVolumeSpecName: "kube-api-access-tljs7") pod "54d56910-d4b7-45b1-8699-5af7eaa29b96" (UID: "54d56910-d4b7-45b1-8699-5af7eaa29b96"). InnerVolumeSpecName "kube-api-access-tljs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.402518 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-scripts" (OuterVolumeSpecName: "scripts") pod "54d56910-d4b7-45b1-8699-5af7eaa29b96" (UID: "54d56910-d4b7-45b1-8699-5af7eaa29b96"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.489067 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.489091 4676 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54d56910-d4b7-45b1-8699-5af7eaa29b96-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.489099 4676 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54d56910-d4b7-45b1-8699-5af7eaa29b96-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.489108 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tljs7\" (UniqueName: \"kubernetes.io/projected/54d56910-d4b7-45b1-8699-5af7eaa29b96-kube-api-access-tljs7\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:25 
crc kubenswrapper[4676]: E0124 00:23:25.528140 4676 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod643e6d41_6572_4f21_8651_7f577967bfe8.slice/crio-e3769b8edac359e6c5b4b6e492f9b2bb7ee70731b0b4c2778ff42c7ddf474ff8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod643e6d41_6572_4f21_8651_7f577967bfe8.slice/crio-conmon-e3769b8edac359e6c5b4b6e492f9b2bb7ee70731b0b4c2778ff42c7ddf474ff8.scope\": RecentStats: unable to find data in memory cache]" Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.532955 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bpr95"] Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.549010 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vtxzm"] Jan 24 00:23:25 crc kubenswrapper[4676]: W0124 00:23:25.569717 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee552380_8d96_4a10_a5b8_2deb2e73b15f.slice/crio-0b915aba37b3b554522a226fb9001f30eb50da19436816f3096df59862d25de5 WatchSource:0}: Error finding container 0b915aba37b3b554522a226fb9001f30eb50da19436816f3096df59862d25de5: Status 404 returned error can't find the container with id 0b915aba37b3b554522a226fb9001f30eb50da19436816f3096df59862d25de5 Jan 24 00:23:25 crc kubenswrapper[4676]: W0124 00:23:25.587926 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod792cc535_5ed4_4e06_a19c_31ba34c7dfc7.slice/crio-82cc1cf40bad027637ba471334a5a9f3e4778b000f0e88aaab517a858e31e686 WatchSource:0}: Error finding container 82cc1cf40bad027637ba471334a5a9f3e4778b000f0e88aaab517a858e31e686: Status 404 returned error can't find the container with id 
82cc1cf40bad027637ba471334a5a9f3e4778b000f0e88aaab517a858e31e686 Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.704242 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "54d56910-d4b7-45b1-8699-5af7eaa29b96" (UID: "54d56910-d4b7-45b1-8699-5af7eaa29b96"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.772026 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-b7wzs"] Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.796705 4676 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.843665 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54d56910-d4b7-45b1-8699-5af7eaa29b96" (UID: "54d56910-d4b7-45b1-8699-5af7eaa29b96"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.899408 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.924365 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-config-data" (OuterVolumeSpecName: "config-data") pod "54d56910-d4b7-45b1-8699-5af7eaa29b96" (UID: "54d56910-d4b7-45b1-8699-5af7eaa29b96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.928024 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5f9c-account-create-update-9tmf6"] Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.956566 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-95bf-account-create-update-7cvdb" event={"ID":"2a5f8ec9-ee62-4078-846c-291a47631ffb","Type":"ContainerStarted","Data":"27297ea6827aaabd3e76dbca0980a960477f82e37ca6b6bb47dba9c165652738"} Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.956602 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-95bf-account-create-update-7cvdb" event={"ID":"2a5f8ec9-ee62-4078-846c-291a47631ffb","Type":"ContainerStarted","Data":"e22d8a5b2f69571d42047489f1547c9fe910b6a190e614d344f927ed2bce6dc6"} Jan 24 00:23:25 crc kubenswrapper[4676]: I0124 00:23:25.977290 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-b7wzs" event={"ID":"9cd4cfd3-72ab-45d6-8683-85acb6cadf66","Type":"ContainerStarted","Data":"19a2176ef505fcf59da905e70e48985ee4a9f5afc8429b0b5550316e0527bf2e"} Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.001888 4676 reconciler_common.go:293] 
"Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d56910-d4b7-45b1-8699-5af7eaa29b96-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.010692 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a963-account-create-update-jx4g6"] Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.014598 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f3fa6e59-785d-4d40-8d73-170552068e43","Type":"ContainerStarted","Data":"db60a43a3269c8d0b17e12f1f8dac812f95ded60a81f6b6b73124658d3fff20a"} Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.022051 4676 generic.go:334] "Generic (PLEG): container finished" podID="643e6d41-6572-4f21-8651-7f577967bfe8" containerID="e3769b8edac359e6c5b4b6e492f9b2bb7ee70731b0b4c2778ff42c7ddf474ff8" exitCode=0 Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.022112 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"643e6d41-6572-4f21-8651-7f577967bfe8","Type":"ContainerDied","Data":"e3769b8edac359e6c5b4b6e492f9b2bb7ee70731b0b4c2778ff42c7ddf474ff8"} Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.031240 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"54d56910-d4b7-45b1-8699-5af7eaa29b96","Type":"ContainerDied","Data":"3a171779e2acfaaedcae6bf53bf013bc9cc89a659479afefe0a8349e50f12f73"} Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.031282 4676 scope.go:117] "RemoveContainer" containerID="1b26cefde45473bb50d47ba50a2ec78ece32f247247ca63eb469045a70b4a673" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.031409 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.037078 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-95bf-account-create-update-7cvdb" podStartSLOduration=6.03705786 podStartE2EDuration="6.03705786s" podCreationTimestamp="2026-01-24 00:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:23:25.987735099 +0000 UTC m=+1190.017706100" watchObservedRunningTime="2026-01-24 00:23:26.03705786 +0000 UTC m=+1190.067028861" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.054699 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vtxzm" event={"ID":"792cc535-5ed4-4e06-a19c-31ba34c7dfc7","Type":"ContainerStarted","Data":"82cc1cf40bad027637ba471334a5a9f3e4778b000f0e88aaab517a858e31e686"} Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.060873 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bpr95" event={"ID":"ee552380-8d96-4a10-a5b8-2deb2e73b15f","Type":"ContainerStarted","Data":"0b915aba37b3b554522a226fb9001f30eb50da19436816f3096df59862d25de5"} Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.123565 4676 scope.go:117] "RemoveContainer" containerID="dae61f7a8060ccdb486fd53e86c1fca33bbb71e0fb9f0d456ad2e0a5a8007cbd" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.135125 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.935849166 podStartE2EDuration="17.135108642s" podCreationTimestamp="2026-01-24 00:23:09 +0000 UTC" firstStartedPulling="2026-01-24 00:23:10.756110546 +0000 UTC m=+1174.786081547" lastFinishedPulling="2026-01-24 00:23:24.955370022 +0000 UTC m=+1188.985341023" observedRunningTime="2026-01-24 00:23:26.034713169 +0000 UTC m=+1190.064684170" 
watchObservedRunningTime="2026-01-24 00:23:26.135108642 +0000 UTC m=+1190.165079643" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.365296 4676 scope.go:117] "RemoveContainer" containerID="f53a3c895796c44b94d61775eb726821f040873261a1a8fbdb79f578ca085ec5" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.457124 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.519653 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-scripts\") pod \"643e6d41-6572-4f21-8651-7f577967bfe8\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.519718 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blvls\" (UniqueName: \"kubernetes.io/projected/643e6d41-6572-4f21-8651-7f577967bfe8-kube-api-access-blvls\") pod \"643e6d41-6572-4f21-8651-7f577967bfe8\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.519880 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/643e6d41-6572-4f21-8651-7f577967bfe8-httpd-run\") pod \"643e6d41-6572-4f21-8651-7f577967bfe8\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.519934 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-combined-ca-bundle\") pod \"643e6d41-6572-4f21-8651-7f577967bfe8\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.519975 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"643e6d41-6572-4f21-8651-7f577967bfe8\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.520004 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/643e6d41-6572-4f21-8651-7f577967bfe8-logs\") pod \"643e6d41-6572-4f21-8651-7f577967bfe8\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.520023 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-public-tls-certs\") pod \"643e6d41-6572-4f21-8651-7f577967bfe8\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.520060 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-config-data\") pod \"643e6d41-6572-4f21-8651-7f577967bfe8\" (UID: \"643e6d41-6572-4f21-8651-7f577967bfe8\") " Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.522042 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/643e6d41-6572-4f21-8651-7f577967bfe8-logs" (OuterVolumeSpecName: "logs") pod "643e6d41-6572-4f21-8651-7f577967bfe8" (UID: "643e6d41-6572-4f21-8651-7f577967bfe8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.535433 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/643e6d41-6572-4f21-8651-7f577967bfe8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "643e6d41-6572-4f21-8651-7f577967bfe8" (UID: "643e6d41-6572-4f21-8651-7f577967bfe8"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.545174 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-scripts" (OuterVolumeSpecName: "scripts") pod "643e6d41-6572-4f21-8651-7f577967bfe8" (UID: "643e6d41-6572-4f21-8651-7f577967bfe8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.552977 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "643e6d41-6572-4f21-8651-7f577967bfe8" (UID: "643e6d41-6572-4f21-8651-7f577967bfe8"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.621970 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/643e6d41-6572-4f21-8651-7f577967bfe8-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.622288 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.622297 4676 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/643e6d41-6572-4f21-8651-7f577967bfe8-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.622325 4676 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.667037 4676 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/643e6d41-6572-4f21-8651-7f577967bfe8-kube-api-access-blvls" (OuterVolumeSpecName: "kube-api-access-blvls") pod "643e6d41-6572-4f21-8651-7f577967bfe8" (UID: "643e6d41-6572-4f21-8651-7f577967bfe8"). InnerVolumeSpecName "kube-api-access-blvls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.723636 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blvls\" (UniqueName: \"kubernetes.io/projected/643e6d41-6572-4f21-8651-7f577967bfe8-kube-api-access-blvls\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.758970 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "643e6d41-6572-4f21-8651-7f577967bfe8" (UID: "643e6d41-6572-4f21-8651-7f577967bfe8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.766702 4676 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.792932 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "643e6d41-6572-4f21-8651-7f577967bfe8" (UID: "643e6d41-6572-4f21-8651-7f577967bfe8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.835267 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.835291 4676 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.835299 4676 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:26 crc kubenswrapper[4676]: I0124 00:23:26.998382 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-config-data" (OuterVolumeSpecName: "config-data") pod "643e6d41-6572-4f21-8651-7f577967bfe8" (UID: "643e6d41-6572-4f21-8651-7f577967bfe8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.038957 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/643e6d41-6572-4f21-8651-7f577967bfe8-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.081860 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a963-account-create-update-jx4g6" event={"ID":"7d47faf0-8eaa-474f-8cde-aed6c10f4a05","Type":"ContainerStarted","Data":"cc3b8c014c9450ef0997c6ac2396003671e4bb4fb9c8343ce0debdf9d3796b15"} Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.081929 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a963-account-create-update-jx4g6" event={"ID":"7d47faf0-8eaa-474f-8cde-aed6c10f4a05","Type":"ContainerStarted","Data":"80f0130f0adbb814673f5191c8d75bf65643388a6bb0829cd96603851a909904"} Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.083341 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-b7wzs" event={"ID":"9cd4cfd3-72ab-45d6-8683-85acb6cadf66","Type":"ContainerStarted","Data":"848f0e8f0de2d0ecec1629bf16e2fb50f2be26fc04335b7c112ae212cbf9035a"} Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.104598 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"643e6d41-6572-4f21-8651-7f577967bfe8","Type":"ContainerDied","Data":"7eaab757b11ce1880975cf769167065cfd261eeb0ede6ebf82f2d90ebe083765"} Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.104650 4676 scope.go:117] "RemoveContainer" containerID="e3769b8edac359e6c5b4b6e492f9b2bb7ee70731b0b4c2778ff42c7ddf474ff8" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.104615 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.119220 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5f9c-account-create-update-9tmf6" event={"ID":"bd4662bb-5179-4f57-8571-5198dcf69bdb","Type":"ContainerStarted","Data":"0b76bccfcacd7fb1c879fa9b8c0e01b681d270afe951e54be9117c071c170135"} Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.119270 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5f9c-account-create-update-9tmf6" event={"ID":"bd4662bb-5179-4f57-8571-5198dcf69bdb","Type":"ContainerStarted","Data":"e0549f129beec80d104c722afe731613b69d0b4983a48a2b5cd879221ad0c238"} Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.136235 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vtxzm" event={"ID":"792cc535-5ed4-4e06-a19c-31ba34c7dfc7","Type":"ContainerStarted","Data":"3ee83666354e8c207624c3af1d827bd9c61811a7455e005083ff4e2e34ef0d81"} Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.159721 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bpr95" event={"ID":"ee552380-8d96-4a10-a5b8-2deb2e73b15f","Type":"ContainerStarted","Data":"e7fc18fa4adecb5d9d62a155b3da665c4c438818bc9b1324ec05947c45e02af0"} Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.160528 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-b7wzs" podStartSLOduration=7.160512206 podStartE2EDuration="7.160512206s" podCreationTimestamp="2026-01-24 00:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:23:27.125721279 +0000 UTC m=+1191.155692290" watchObservedRunningTime="2026-01-24 00:23:27.160512206 +0000 UTC m=+1191.190483207" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.161793 4676 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-a963-account-create-update-jx4g6" podStartSLOduration=7.161786185 podStartE2EDuration="7.161786185s" podCreationTimestamp="2026-01-24 00:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:23:27.103794912 +0000 UTC m=+1191.133765913" watchObservedRunningTime="2026-01-24 00:23:27.161786185 +0000 UTC m=+1191.191757186" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.176998 4676 generic.go:334] "Generic (PLEG): container finished" podID="2a5f8ec9-ee62-4078-846c-291a47631ffb" containerID="27297ea6827aaabd3e76dbca0980a960477f82e37ca6b6bb47dba9c165652738" exitCode=0 Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.177101 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-95bf-account-create-update-7cvdb" event={"ID":"2a5f8ec9-ee62-4078-846c-291a47631ffb","Type":"ContainerDied","Data":"27297ea6827aaabd3e76dbca0980a960477f82e37ca6b6bb47dba9c165652738"} Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.196886 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-5f9c-account-create-update-9tmf6" podStartSLOduration=7.196867792 podStartE2EDuration="7.196867792s" podCreationTimestamp="2026-01-24 00:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:23:27.159686442 +0000 UTC m=+1191.189657443" watchObservedRunningTime="2026-01-24 00:23:27.196867792 +0000 UTC m=+1191.226838793" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.199814 4676 generic.go:334] "Generic (PLEG): container finished" podID="77ec0563-46f1-45b0-892b-352d088f9517" containerID="1989fb85f0013ac25e4d8fb0c72cdef0a1a08075037f574a25a6701244b8dd50" exitCode=0 Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 
00:23:27.200614 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"77ec0563-46f1-45b0-892b-352d088f9517","Type":"ContainerDied","Data":"1989fb85f0013ac25e4d8fb0c72cdef0a1a08075037f574a25a6701244b8dd50"} Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.206923 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.207855 4676 scope.go:117] "RemoveContainer" containerID="b6416b75b9719da0d296633782ac2fc04e1dd20b2e9d142f5025be8c5a5d754d" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.214488 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-bpr95" podStartSLOduration=8.214471487 podStartE2EDuration="8.214471487s" podCreationTimestamp="2026-01-24 00:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:23:27.190161778 +0000 UTC m=+1191.220132779" watchObservedRunningTime="2026-01-24 00:23:27.214471487 +0000 UTC m=+1191.244442488" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.236750 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-vtxzm" podStartSLOduration=7.236734585 podStartE2EDuration="7.236734585s" podCreationTimestamp="2026-01-24 00:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:23:27.204887947 +0000 UTC m=+1191.234858948" watchObservedRunningTime="2026-01-24 00:23:27.236734585 +0000 UTC m=+1191.266705576" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.251037 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-internal-tls-certs\") pod \"77ec0563-46f1-45b0-892b-352d088f9517\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.251092 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77ec0563-46f1-45b0-892b-352d088f9517-httpd-run\") pod \"77ec0563-46f1-45b0-892b-352d088f9517\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.251110 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-config-data\") pod \"77ec0563-46f1-45b0-892b-352d088f9517\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.251127 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"77ec0563-46f1-45b0-892b-352d088f9517\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.251143 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77ec0563-46f1-45b0-892b-352d088f9517-logs\") pod \"77ec0563-46f1-45b0-892b-352d088f9517\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.251159 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-scripts\") pod \"77ec0563-46f1-45b0-892b-352d088f9517\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.251233 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-combined-ca-bundle\") pod \"77ec0563-46f1-45b0-892b-352d088f9517\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.251256 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49fqw\" (UniqueName: \"kubernetes.io/projected/77ec0563-46f1-45b0-892b-352d088f9517-kube-api-access-49fqw\") pod \"77ec0563-46f1-45b0-892b-352d088f9517\" (UID: \"77ec0563-46f1-45b0-892b-352d088f9517\") " Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.252599 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77ec0563-46f1-45b0-892b-352d088f9517-logs" (OuterVolumeSpecName: "logs") pod "77ec0563-46f1-45b0-892b-352d088f9517" (UID: "77ec0563-46f1-45b0-892b-352d088f9517"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.253083 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77ec0563-46f1-45b0-892b-352d088f9517-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.262497 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77ec0563-46f1-45b0-892b-352d088f9517-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "77ec0563-46f1-45b0-892b-352d088f9517" (UID: "77ec0563-46f1-45b0-892b-352d088f9517"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.291982 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-scripts" (OuterVolumeSpecName: "scripts") pod "77ec0563-46f1-45b0-892b-352d088f9517" (UID: "77ec0563-46f1-45b0-892b-352d088f9517"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.292084 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "77ec0563-46f1-45b0-892b-352d088f9517" (UID: "77ec0563-46f1-45b0-892b-352d088f9517"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.324182 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77ec0563-46f1-45b0-892b-352d088f9517-kube-api-access-49fqw" (OuterVolumeSpecName: "kube-api-access-49fqw") pod "77ec0563-46f1-45b0-892b-352d088f9517" (UID: "77ec0563-46f1-45b0-892b-352d088f9517"). InnerVolumeSpecName "kube-api-access-49fqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.354250 4676 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77ec0563-46f1-45b0-892b-352d088f9517-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.354280 4676 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.354289 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.354298 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49fqw\" (UniqueName: \"kubernetes.io/projected/77ec0563-46f1-45b0-892b-352d088f9517-kube-api-access-49fqw\") on node \"crc\" DevicePath 
\"\"" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.362831 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.363232 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "77ec0563-46f1-45b0-892b-352d088f9517" (UID: "77ec0563-46f1-45b0-892b-352d088f9517"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.373654 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.387674 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-bd579cfd9-q4npp" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.391740 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-config-data" (OuterVolumeSpecName: "config-data") pod "77ec0563-46f1-45b0-892b-352d088f9517" (UID: "77ec0563-46f1-45b0-892b-352d088f9517"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.398501 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 00:23:27 crc kubenswrapper[4676]: E0124 00:23:27.399006 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerName="ceilometer-notification-agent" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.399022 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerName="ceilometer-notification-agent" Jan 24 00:23:27 crc kubenswrapper[4676]: E0124 00:23:27.399047 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643e6d41-6572-4f21-8651-7f577967bfe8" containerName="glance-httpd" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.399055 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="643e6d41-6572-4f21-8651-7f577967bfe8" containerName="glance-httpd" Jan 24 00:23:27 crc kubenswrapper[4676]: E0124 00:23:27.399071 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ec0563-46f1-45b0-892b-352d088f9517" containerName="glance-log" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.399079 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ec0563-46f1-45b0-892b-352d088f9517" containerName="glance-log" Jan 24 00:23:27 crc kubenswrapper[4676]: E0124 00:23:27.399089 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerName="proxy-httpd" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.399097 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerName="proxy-httpd" Jan 24 00:23:27 crc kubenswrapper[4676]: E0124 00:23:27.399116 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643e6d41-6572-4f21-8651-7f577967bfe8" 
containerName="glance-log" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.399123 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="643e6d41-6572-4f21-8651-7f577967bfe8" containerName="glance-log" Jan 24 00:23:27 crc kubenswrapper[4676]: E0124 00:23:27.399134 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ec0563-46f1-45b0-892b-352d088f9517" containerName="glance-httpd" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.399142 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ec0563-46f1-45b0-892b-352d088f9517" containerName="glance-httpd" Jan 24 00:23:27 crc kubenswrapper[4676]: E0124 00:23:27.399155 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerName="sg-core" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.399162 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerName="sg-core" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.399418 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerName="sg-core" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.400123 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerName="ceilometer-notification-agent" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.400155 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ec0563-46f1-45b0-892b-352d088f9517" containerName="glance-log" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.400174 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="643e6d41-6572-4f21-8651-7f577967bfe8" containerName="glance-httpd" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.400182 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="643e6d41-6572-4f21-8651-7f577967bfe8" containerName="glance-log" Jan 
24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.400192 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" containerName="proxy-httpd" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.400211 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ec0563-46f1-45b0-892b-352d088f9517" containerName="glance-httpd" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.401320 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77ec0563-46f1-45b0-892b-352d088f9517" (UID: "77ec0563-46f1-45b0-892b-352d088f9517"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.402355 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.403797 4676 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.405616 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.410070 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.416668 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.466254 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.466295 4676 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.466316 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.466329 4676 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77ec0563-46f1-45b0-892b-352d088f9517-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.536498 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-756b7b5794-xhgs6"] Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.537478 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-756b7b5794-xhgs6" podUID="1d557c78-075a-44f7-a530-860ae3ec8ffd" containerName="neutron-httpd" containerID="cri-o://ac9099c2dcc1c109c19d71ba0aa8ba6b06382c4d9cd67585b0af07f9df929d7a" gracePeriod=30 Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.537009 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-756b7b5794-xhgs6" podUID="1d557c78-075a-44f7-a530-860ae3ec8ffd" containerName="neutron-api" containerID="cri-o://90033ae7fbe939cb7401b9fbf3c1fcdc688cc9fa72836aaba5d889149f24959e" gracePeriod=30 Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.569471 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a6cef518-8385-4315-83d5-f46a6144d5a0-scripts\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.569532 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6cef518-8385-4315-83d5-f46a6144d5a0-logs\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.569683 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr4ws\" (UniqueName: \"kubernetes.io/projected/a6cef518-8385-4315-83d5-f46a6144d5a0-kube-api-access-sr4ws\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.569743 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.569765 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6cef518-8385-4315-83d5-f46a6144d5a0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.569805 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a6cef518-8385-4315-83d5-f46a6144d5a0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.569846 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6cef518-8385-4315-83d5-f46a6144d5a0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.569870 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6cef518-8385-4315-83d5-f46a6144d5a0-config-data\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.686684 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6cef518-8385-4315-83d5-f46a6144d5a0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.686735 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6cef518-8385-4315-83d5-f46a6144d5a0-config-data\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.686775 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a6cef518-8385-4315-83d5-f46a6144d5a0-scripts\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.686790 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6cef518-8385-4315-83d5-f46a6144d5a0-logs\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.686873 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr4ws\" (UniqueName: \"kubernetes.io/projected/a6cef518-8385-4315-83d5-f46a6144d5a0-kube-api-access-sr4ws\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.686925 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.686943 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6cef518-8385-4315-83d5-f46a6144d5a0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.686975 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6cef518-8385-4315-83d5-f46a6144d5a0-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.689959 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6cef518-8385-4315-83d5-f46a6144d5a0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.690670 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6cef518-8385-4315-83d5-f46a6144d5a0-logs\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.691082 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6cef518-8385-4315-83d5-f46a6144d5a0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.691234 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.693597 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6cef518-8385-4315-83d5-f46a6144d5a0-scripts\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " 
pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.710372 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6cef518-8385-4315-83d5-f46a6144d5a0-config-data\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.716365 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr4ws\" (UniqueName: \"kubernetes.io/projected/a6cef518-8385-4315-83d5-f46a6144d5a0-kube-api-access-sr4ws\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.717572 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6cef518-8385-4315-83d5-f46a6144d5a0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:27 crc kubenswrapper[4676]: I0124 00:23:27.753926 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"a6cef518-8385-4315-83d5-f46a6144d5a0\") " pod="openstack/glance-default-external-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.025679 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.212829 4676 generic.go:334] "Generic (PLEG): container finished" podID="bd4662bb-5179-4f57-8571-5198dcf69bdb" containerID="0b76bccfcacd7fb1c879fa9b8c0e01b681d270afe951e54be9117c071c170135" exitCode=0 Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.213138 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5f9c-account-create-update-9tmf6" event={"ID":"bd4662bb-5179-4f57-8571-5198dcf69bdb","Type":"ContainerDied","Data":"0b76bccfcacd7fb1c879fa9b8c0e01b681d270afe951e54be9117c071c170135"} Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.215163 4676 generic.go:334] "Generic (PLEG): container finished" podID="792cc535-5ed4-4e06-a19c-31ba34c7dfc7" containerID="3ee83666354e8c207624c3af1d827bd9c61811a7455e005083ff4e2e34ef0d81" exitCode=0 Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.215201 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vtxzm" event={"ID":"792cc535-5ed4-4e06-a19c-31ba34c7dfc7","Type":"ContainerDied","Data":"3ee83666354e8c207624c3af1d827bd9c61811a7455e005083ff4e2e34ef0d81"} Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.219486 4676 generic.go:334] "Generic (PLEG): container finished" podID="ee552380-8d96-4a10-a5b8-2deb2e73b15f" containerID="e7fc18fa4adecb5d9d62a155b3da665c4c438818bc9b1324ec05947c45e02af0" exitCode=0 Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.219540 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bpr95" event={"ID":"ee552380-8d96-4a10-a5b8-2deb2e73b15f","Type":"ContainerDied","Data":"e7fc18fa4adecb5d9d62a155b3da665c4c438818bc9b1324ec05947c45e02af0"} Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.232688 4676 generic.go:334] "Generic (PLEG): container finished" podID="7d47faf0-8eaa-474f-8cde-aed6c10f4a05" 
containerID="cc3b8c014c9450ef0997c6ac2396003671e4bb4fb9c8343ce0debdf9d3796b15" exitCode=0 Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.232784 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a963-account-create-update-jx4g6" event={"ID":"7d47faf0-8eaa-474f-8cde-aed6c10f4a05","Type":"ContainerDied","Data":"cc3b8c014c9450ef0997c6ac2396003671e4bb4fb9c8343ce0debdf9d3796b15"} Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.237327 4676 generic.go:334] "Generic (PLEG): container finished" podID="9cd4cfd3-72ab-45d6-8683-85acb6cadf66" containerID="848f0e8f0de2d0ecec1629bf16e2fb50f2be26fc04335b7c112ae212cbf9035a" exitCode=0 Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.237479 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-b7wzs" event={"ID":"9cd4cfd3-72ab-45d6-8683-85acb6cadf66","Type":"ContainerDied","Data":"848f0e8f0de2d0ecec1629bf16e2fb50f2be26fc04335b7c112ae212cbf9035a"} Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.249054 4676 generic.go:334] "Generic (PLEG): container finished" podID="1d557c78-075a-44f7-a530-860ae3ec8ffd" containerID="ac9099c2dcc1c109c19d71ba0aa8ba6b06382c4d9cd67585b0af07f9df929d7a" exitCode=0 Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.249109 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-756b7b5794-xhgs6" event={"ID":"1d557c78-075a-44f7-a530-860ae3ec8ffd","Type":"ContainerDied","Data":"ac9099c2dcc1c109c19d71ba0aa8ba6b06382c4d9cd67585b0af07f9df929d7a"} Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.252723 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"77ec0563-46f1-45b0-892b-352d088f9517","Type":"ContainerDied","Data":"07d4384811e1f2313a45aceab532176f3c3d05348e79e5d4955b534eb2e2868c"} Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.252777 4676 scope.go:117] "RemoveContainer" 
containerID="1989fb85f0013ac25e4d8fb0c72cdef0a1a08075037f574a25a6701244b8dd50" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.252782 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.275226 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="643e6d41-6572-4f21-8651-7f577967bfe8" path="/var/lib/kubelet/pods/643e6d41-6572-4f21-8651-7f577967bfe8/volumes" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.319122 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.325621 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.345344 4676 scope.go:117] "RemoveContainer" containerID="7c7d07ff8323e9ccead03e4d66a066ad64d46892c577bd47375ca06cbd3b2418" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.360452 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.361912 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.363614 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.363756 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.377066 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.509452 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/042ec835-b8c1-43be-a19d-d70f76128e26-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.509792 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.509812 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/042ec835-b8c1-43be-a19d-d70f76128e26-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.509839 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/042ec835-b8c1-43be-a19d-d70f76128e26-config-data\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.509869 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042ec835-b8c1-43be-a19d-d70f76128e26-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.509888 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h25m4\" (UniqueName: \"kubernetes.io/projected/042ec835-b8c1-43be-a19d-d70f76128e26-kube-api-access-h25m4\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.509926 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/042ec835-b8c1-43be-a19d-d70f76128e26-logs\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.509947 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042ec835-b8c1-43be-a19d-d70f76128e26-scripts\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.612448 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/042ec835-b8c1-43be-a19d-d70f76128e26-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.612492 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h25m4\" (UniqueName: \"kubernetes.io/projected/042ec835-b8c1-43be-a19d-d70f76128e26-kube-api-access-h25m4\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.612539 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/042ec835-b8c1-43be-a19d-d70f76128e26-logs\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.612560 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042ec835-b8c1-43be-a19d-d70f76128e26-scripts\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.612630 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/042ec835-b8c1-43be-a19d-d70f76128e26-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.612669 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.612686 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/042ec835-b8c1-43be-a19d-d70f76128e26-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.612714 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042ec835-b8c1-43be-a19d-d70f76128e26-config-data\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.615745 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/042ec835-b8c1-43be-a19d-d70f76128e26-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.616093 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.617067 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/042ec835-b8c1-43be-a19d-d70f76128e26-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.623588 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042ec835-b8c1-43be-a19d-d70f76128e26-scripts\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.623952 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042ec835-b8c1-43be-a19d-d70f76128e26-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.626389 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042ec835-b8c1-43be-a19d-d70f76128e26-config-data\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.637362 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/042ec835-b8c1-43be-a19d-d70f76128e26-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.645831 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h25m4\" (UniqueName: \"kubernetes.io/projected/042ec835-b8c1-43be-a19d-d70f76128e26-kube-api-access-h25m4\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " 
pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.651024 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.671169 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"042ec835-b8c1-43be-a19d-d70f76128e26\") " pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.705699 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.740011 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-95bf-account-create-update-7cvdb" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.920919 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmmfv\" (UniqueName: \"kubernetes.io/projected/2a5f8ec9-ee62-4078-846c-291a47631ffb-kube-api-access-qmmfv\") pod \"2a5f8ec9-ee62-4078-846c-291a47631ffb\" (UID: \"2a5f8ec9-ee62-4078-846c-291a47631ffb\") " Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.921242 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a5f8ec9-ee62-4078-846c-291a47631ffb-operator-scripts\") pod \"2a5f8ec9-ee62-4078-846c-291a47631ffb\" (UID: \"2a5f8ec9-ee62-4078-846c-291a47631ffb\") " Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.925699 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a5f8ec9-ee62-4078-846c-291a47631ffb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a5f8ec9-ee62-4078-846c-291a47631ffb" (UID: 
"2a5f8ec9-ee62-4078-846c-291a47631ffb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:23:28 crc kubenswrapper[4676]: I0124 00:23:28.956318 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5f8ec9-ee62-4078-846c-291a47631ffb-kube-api-access-qmmfv" (OuterVolumeSpecName: "kube-api-access-qmmfv") pod "2a5f8ec9-ee62-4078-846c-291a47631ffb" (UID: "2a5f8ec9-ee62-4078-846c-291a47631ffb"). InnerVolumeSpecName "kube-api-access-qmmfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:23:29 crc kubenswrapper[4676]: I0124 00:23:29.024323 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmmfv\" (UniqueName: \"kubernetes.io/projected/2a5f8ec9-ee62-4078-846c-291a47631ffb-kube-api-access-qmmfv\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:29 crc kubenswrapper[4676]: I0124 00:23:29.024353 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a5f8ec9-ee62-4078-846c-291a47631ffb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:29 crc kubenswrapper[4676]: I0124 00:23:29.263356 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-95bf-account-create-update-7cvdb" event={"ID":"2a5f8ec9-ee62-4078-846c-291a47631ffb","Type":"ContainerDied","Data":"e22d8a5b2f69571d42047489f1547c9fe910b6a190e614d344f927ed2bce6dc6"} Jan 24 00:23:29 crc kubenswrapper[4676]: I0124 00:23:29.263424 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e22d8a5b2f69571d42047489f1547c9fe910b6a190e614d344f927ed2bce6dc6" Jan 24 00:23:29 crc kubenswrapper[4676]: I0124 00:23:29.263368 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-95bf-account-create-update-7cvdb" Jan 24 00:23:29 crc kubenswrapper[4676]: I0124 00:23:29.265952 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a6cef518-8385-4315-83d5-f46a6144d5a0","Type":"ContainerStarted","Data":"9b4fae0548ececbcf6b0297fa75fbcf74c89ffcaf56d9df317752603eb9304e3"} Jan 24 00:23:29 crc kubenswrapper[4676]: I0124 00:23:29.356312 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 00:23:29 crc kubenswrapper[4676]: W0124 00:23:29.374932 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod042ec835_b8c1_43be_a19d_d70f76128e26.slice/crio-37a7e8266ce7ce32ef66a6acabe1d965b18237492ec2068eb4e79fcb39cf1908 WatchSource:0}: Error finding container 37a7e8266ce7ce32ef66a6acabe1d965b18237492ec2068eb4e79fcb39cf1908: Status 404 returned error can't find the container with id 37a7e8266ce7ce32ef66a6acabe1d965b18237492ec2068eb4e79fcb39cf1908 Jan 24 00:23:29 crc kubenswrapper[4676]: I0124 00:23:29.863126 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a963-account-create-update-jx4g6" Jan 24 00:23:29 crc kubenswrapper[4676]: I0124 00:23:29.972740 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nljvc\" (UniqueName: \"kubernetes.io/projected/7d47faf0-8eaa-474f-8cde-aed6c10f4a05-kube-api-access-nljvc\") pod \"7d47faf0-8eaa-474f-8cde-aed6c10f4a05\" (UID: \"7d47faf0-8eaa-474f-8cde-aed6c10f4a05\") " Jan 24 00:23:29 crc kubenswrapper[4676]: I0124 00:23:29.972922 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d47faf0-8eaa-474f-8cde-aed6c10f4a05-operator-scripts\") pod \"7d47faf0-8eaa-474f-8cde-aed6c10f4a05\" (UID: \"7d47faf0-8eaa-474f-8cde-aed6c10f4a05\") " Jan 24 00:23:29 crc kubenswrapper[4676]: I0124 00:23:29.973427 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d47faf0-8eaa-474f-8cde-aed6c10f4a05-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d47faf0-8eaa-474f-8cde-aed6c10f4a05" (UID: "7d47faf0-8eaa-474f-8cde-aed6c10f4a05"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:23:29 crc kubenswrapper[4676]: I0124 00:23:29.975064 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d47faf0-8eaa-474f-8cde-aed6c10f4a05-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:29 crc kubenswrapper[4676]: I0124 00:23:29.976065 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d47faf0-8eaa-474f-8cde-aed6c10f4a05-kube-api-access-nljvc" (OuterVolumeSpecName: "kube-api-access-nljvc") pod "7d47faf0-8eaa-474f-8cde-aed6c10f4a05" (UID: "7d47faf0-8eaa-474f-8cde-aed6c10f4a05"). InnerVolumeSpecName "kube-api-access-nljvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.077330 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nljvc\" (UniqueName: \"kubernetes.io/projected/7d47faf0-8eaa-474f-8cde-aed6c10f4a05-kube-api-access-nljvc\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.196678 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vtxzm" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.282146 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-727rf\" (UniqueName: \"kubernetes.io/projected/792cc535-5ed4-4e06-a19c-31ba34c7dfc7-kube-api-access-727rf\") pod \"792cc535-5ed4-4e06-a19c-31ba34c7dfc7\" (UID: \"792cc535-5ed4-4e06-a19c-31ba34c7dfc7\") " Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.282256 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/792cc535-5ed4-4e06-a19c-31ba34c7dfc7-operator-scripts\") pod \"792cc535-5ed4-4e06-a19c-31ba34c7dfc7\" (UID: \"792cc535-5ed4-4e06-a19c-31ba34c7dfc7\") " Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.284899 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/792cc535-5ed4-4e06-a19c-31ba34c7dfc7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "792cc535-5ed4-4e06-a19c-31ba34c7dfc7" (UID: "792cc535-5ed4-4e06-a19c-31ba34c7dfc7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.287767 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/792cc535-5ed4-4e06-a19c-31ba34c7dfc7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.310138 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77ec0563-46f1-45b0-892b-352d088f9517" path="/var/lib/kubelet/pods/77ec0563-46f1-45b0-892b-352d088f9517/volumes" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.324762 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792cc535-5ed4-4e06-a19c-31ba34c7dfc7-kube-api-access-727rf" (OuterVolumeSpecName: "kube-api-access-727rf") pod "792cc535-5ed4-4e06-a19c-31ba34c7dfc7" (UID: "792cc535-5ed4-4e06-a19c-31ba34c7dfc7"). InnerVolumeSpecName "kube-api-access-727rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.354079 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a6cef518-8385-4315-83d5-f46a6144d5a0","Type":"ContainerStarted","Data":"2a5b6c0202ee01285056feddeb0e637c423f5cffcc6053140ac8681ada0216c3"} Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.354111 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"042ec835-b8c1-43be-a19d-d70f76128e26","Type":"ContainerStarted","Data":"37a7e8266ce7ce32ef66a6acabe1d965b18237492ec2068eb4e79fcb39cf1908"} Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.354125 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5f9c-account-create-update-9tmf6" event={"ID":"bd4662bb-5179-4f57-8571-5198dcf69bdb","Type":"ContainerDied","Data":"e0549f129beec80d104c722afe731613b69d0b4983a48a2b5cd879221ad0c238"} 
Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.354138 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0549f129beec80d104c722afe731613b69d0b4983a48a2b5cd879221ad0c238" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.355731 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5f9c-account-create-update-9tmf6" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.359842 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vtxzm" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.360073 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vtxzm" event={"ID":"792cc535-5ed4-4e06-a19c-31ba34c7dfc7","Type":"ContainerDied","Data":"82cc1cf40bad027637ba471334a5a9f3e4778b000f0e88aaab517a858e31e686"} Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.360092 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82cc1cf40bad027637ba471334a5a9f3e4778b000f0e88aaab517a858e31e686" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.375271 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a963-account-create-update-jx4g6" event={"ID":"7d47faf0-8eaa-474f-8cde-aed6c10f4a05","Type":"ContainerDied","Data":"80f0130f0adbb814673f5191c8d75bf65643388a6bb0829cd96603851a909904"} Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.375311 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80f0130f0adbb814673f5191c8d75bf65643388a6bb0829cd96603851a909904" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.375393 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a963-account-create-update-jx4g6" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.389730 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-727rf\" (UniqueName: \"kubernetes.io/projected/792cc535-5ed4-4e06-a19c-31ba34c7dfc7-kube-api-access-727rf\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.398950 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-b7wzs" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.410895 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bpr95" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.491332 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee552380-8d96-4a10-a5b8-2deb2e73b15f-operator-scripts\") pod \"ee552380-8d96-4a10-a5b8-2deb2e73b15f\" (UID: \"ee552380-8d96-4a10-a5b8-2deb2e73b15f\") " Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.493122 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd4662bb-5179-4f57-8571-5198dcf69bdb-operator-scripts\") pod \"bd4662bb-5179-4f57-8571-5198dcf69bdb\" (UID: \"bd4662bb-5179-4f57-8571-5198dcf69bdb\") " Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.493174 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wl59\" (UniqueName: \"kubernetes.io/projected/ee552380-8d96-4a10-a5b8-2deb2e73b15f-kube-api-access-2wl59\") pod \"ee552380-8d96-4a10-a5b8-2deb2e73b15f\" (UID: \"ee552380-8d96-4a10-a5b8-2deb2e73b15f\") " Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.493209 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfkb7\" 
(UniqueName: \"kubernetes.io/projected/9cd4cfd3-72ab-45d6-8683-85acb6cadf66-kube-api-access-cfkb7\") pod \"9cd4cfd3-72ab-45d6-8683-85acb6cadf66\" (UID: \"9cd4cfd3-72ab-45d6-8683-85acb6cadf66\") " Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.493268 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2ngb\" (UniqueName: \"kubernetes.io/projected/bd4662bb-5179-4f57-8571-5198dcf69bdb-kube-api-access-g2ngb\") pod \"bd4662bb-5179-4f57-8571-5198dcf69bdb\" (UID: \"bd4662bb-5179-4f57-8571-5198dcf69bdb\") " Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.493304 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cd4cfd3-72ab-45d6-8683-85acb6cadf66-operator-scripts\") pod \"9cd4cfd3-72ab-45d6-8683-85acb6cadf66\" (UID: \"9cd4cfd3-72ab-45d6-8683-85acb6cadf66\") " Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.493042 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee552380-8d96-4a10-a5b8-2deb2e73b15f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee552380-8d96-4a10-a5b8-2deb2e73b15f" (UID: "ee552380-8d96-4a10-a5b8-2deb2e73b15f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.494669 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee552380-8d96-4a10-a5b8-2deb2e73b15f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.495162 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cd4cfd3-72ab-45d6-8683-85acb6cadf66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9cd4cfd3-72ab-45d6-8683-85acb6cadf66" (UID: "9cd4cfd3-72ab-45d6-8683-85acb6cadf66"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.495644 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd4662bb-5179-4f57-8571-5198dcf69bdb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd4662bb-5179-4f57-8571-5198dcf69bdb" (UID: "bd4662bb-5179-4f57-8571-5198dcf69bdb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.513492 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd4cfd3-72ab-45d6-8683-85acb6cadf66-kube-api-access-cfkb7" (OuterVolumeSpecName: "kube-api-access-cfkb7") pod "9cd4cfd3-72ab-45d6-8683-85acb6cadf66" (UID: "9cd4cfd3-72ab-45d6-8683-85acb6cadf66"). InnerVolumeSpecName "kube-api-access-cfkb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.515796 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd4662bb-5179-4f57-8571-5198dcf69bdb-kube-api-access-g2ngb" (OuterVolumeSpecName: "kube-api-access-g2ngb") pod "bd4662bb-5179-4f57-8571-5198dcf69bdb" (UID: "bd4662bb-5179-4f57-8571-5198dcf69bdb"). InnerVolumeSpecName "kube-api-access-g2ngb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.516331 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee552380-8d96-4a10-a5b8-2deb2e73b15f-kube-api-access-2wl59" (OuterVolumeSpecName: "kube-api-access-2wl59") pod "ee552380-8d96-4a10-a5b8-2deb2e73b15f" (UID: "ee552380-8d96-4a10-a5b8-2deb2e73b15f"). InnerVolumeSpecName "kube-api-access-2wl59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.596187 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd4662bb-5179-4f57-8571-5198dcf69bdb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.596206 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wl59\" (UniqueName: \"kubernetes.io/projected/ee552380-8d96-4a10-a5b8-2deb2e73b15f-kube-api-access-2wl59\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.596216 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfkb7\" (UniqueName: \"kubernetes.io/projected/9cd4cfd3-72ab-45d6-8683-85acb6cadf66-kube-api-access-cfkb7\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.596225 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2ngb\" (UniqueName: \"kubernetes.io/projected/bd4662bb-5179-4f57-8571-5198dcf69bdb-kube-api-access-g2ngb\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:30 crc kubenswrapper[4676]: I0124 00:23:30.596235 4676 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cd4cfd3-72ab-45d6-8683-85acb6cadf66-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.038644 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.215544 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-combined-ca-bundle\") pod \"1d557c78-075a-44f7-a530-860ae3ec8ffd\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.215623 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-httpd-config\") pod \"1d557c78-075a-44f7-a530-860ae3ec8ffd\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.215664 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-ovndb-tls-certs\") pod \"1d557c78-075a-44f7-a530-860ae3ec8ffd\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.215695 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkwcg\" (UniqueName: \"kubernetes.io/projected/1d557c78-075a-44f7-a530-860ae3ec8ffd-kube-api-access-kkwcg\") pod \"1d557c78-075a-44f7-a530-860ae3ec8ffd\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.215749 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-config\") pod \"1d557c78-075a-44f7-a530-860ae3ec8ffd\" (UID: \"1d557c78-075a-44f7-a530-860ae3ec8ffd\") " Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.233454 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1d557c78-075a-44f7-a530-860ae3ec8ffd" (UID: "1d557c78-075a-44f7-a530-860ae3ec8ffd"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.237396 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d557c78-075a-44f7-a530-860ae3ec8ffd-kube-api-access-kkwcg" (OuterVolumeSpecName: "kube-api-access-kkwcg") pod "1d557c78-075a-44f7-a530-860ae3ec8ffd" (UID: "1d557c78-075a-44f7-a530-860ae3ec8ffd"). InnerVolumeSpecName "kube-api-access-kkwcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.317473 4676 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.317506 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkwcg\" (UniqueName: \"kubernetes.io/projected/1d557c78-075a-44f7-a530-860ae3ec8ffd-kube-api-access-kkwcg\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.330077 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d557c78-075a-44f7-a530-860ae3ec8ffd" (UID: "1d557c78-075a-44f7-a530-860ae3ec8ffd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.368927 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-config" (OuterVolumeSpecName: "config") pod "1d557c78-075a-44f7-a530-860ae3ec8ffd" (UID: "1d557c78-075a-44f7-a530-860ae3ec8ffd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.370531 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1d557c78-075a-44f7-a530-860ae3ec8ffd" (UID: "1d557c78-075a-44f7-a530-860ae3ec8ffd"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.386972 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-b7wzs" event={"ID":"9cd4cfd3-72ab-45d6-8683-85acb6cadf66","Type":"ContainerDied","Data":"19a2176ef505fcf59da905e70e48985ee4a9f5afc8429b0b5550316e0527bf2e"} Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.387299 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19a2176ef505fcf59da905e70e48985ee4a9f5afc8429b0b5550316e0527bf2e" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.387398 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-b7wzs" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.399492 4676 generic.go:334] "Generic (PLEG): container finished" podID="ac7dce6b-3bd9-4ad9-9485-83d9384b8bad" containerID="c8c367e4ea3e593fa82e51f67b7f8371ac3844af82364461f69e4513b638a905" exitCode=137 Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.399559 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f876ddf46-fs7qv" event={"ID":"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad","Type":"ContainerDied","Data":"c8c367e4ea3e593fa82e51f67b7f8371ac3844af82364461f69e4513b638a905"} Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.399585 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f876ddf46-fs7qv" event={"ID":"ac7dce6b-3bd9-4ad9-9485-83d9384b8bad","Type":"ContainerStarted","Data":"7eea686e131fa26ee3ef738923c53a453d90d9eb552a0df5b750dce59c1c117a"} Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.404993 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a6cef518-8385-4315-83d5-f46a6144d5a0","Type":"ContainerStarted","Data":"d986b5a9792da22ccefff73326e189cc04f8e031e47dca7d8b188a0f58a32bbd"} Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.406967 4676 generic.go:334] "Generic (PLEG): container finished" podID="1d557c78-075a-44f7-a530-860ae3ec8ffd" containerID="90033ae7fbe939cb7401b9fbf3c1fcdc688cc9fa72836aaba5d889149f24959e" exitCode=0 Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.407022 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-756b7b5794-xhgs6" event={"ID":"1d557c78-075a-44f7-a530-860ae3ec8ffd","Type":"ContainerDied","Data":"90033ae7fbe939cb7401b9fbf3c1fcdc688cc9fa72836aaba5d889149f24959e"} Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.407047 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-756b7b5794-xhgs6" 
event={"ID":"1d557c78-075a-44f7-a530-860ae3ec8ffd","Type":"ContainerDied","Data":"2a37de0875eb36fe5f67aef1544f7cf8a8b0cd17d7552f5a5cd3065c861671b8"} Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.407063 4676 scope.go:117] "RemoveContainer" containerID="ac9099c2dcc1c109c19d71ba0aa8ba6b06382c4d9cd67585b0af07f9df929d7a" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.407167 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-756b7b5794-xhgs6" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.410001 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"042ec835-b8c1-43be-a19d-d70f76128e26","Type":"ContainerStarted","Data":"b1b68a26c2430d43efd21845e6480121691d5af08b91905ad5de393c697d49d7"} Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.411221 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bpr95" event={"ID":"ee552380-8d96-4a10-a5b8-2deb2e73b15f","Type":"ContainerDied","Data":"0b915aba37b3b554522a226fb9001f30eb50da19436816f3096df59862d25de5"} Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.411300 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b915aba37b3b554522a226fb9001f30eb50da19436816f3096df59862d25de5" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.411760 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bpr95" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.416004 4676 generic.go:334] "Generic (PLEG): container finished" podID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerID="9b6a2140f368edfea407714ce9a9afb7bd7057997de5cfab1861b7a16523894e" exitCode=137 Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.416074 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5f9c-account-create-update-9tmf6" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.418308 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf988b4bd-ls7hp" event={"ID":"9d2451a0-4896-46e4-9b9e-e309ccdf02f2","Type":"ContainerDied","Data":"9b6a2140f368edfea407714ce9a9afb7bd7057997de5cfab1861b7a16523894e"} Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.418360 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf988b4bd-ls7hp" event={"ID":"9d2451a0-4896-46e4-9b9e-e309ccdf02f2","Type":"ContainerStarted","Data":"6c39f8f7eb482d941eb276b021603978c3c819c52d54e3619ae39a0cd7e29c64"} Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.419868 4676 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.419889 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.419898 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d557c78-075a-44f7-a530-860ae3ec8ffd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.449891 4676 scope.go:117] "RemoveContainer" containerID="90033ae7fbe939cb7401b9fbf3c1fcdc688cc9fa72836aaba5d889149f24959e" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.487179 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.487161361 podStartE2EDuration="4.487161361s" podCreationTimestamp="2026-01-24 00:23:27 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:23:31.458550151 +0000 UTC m=+1195.488521152" watchObservedRunningTime="2026-01-24 00:23:31.487161361 +0000 UTC m=+1195.517132352" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.502006 4676 scope.go:117] "RemoveContainer" containerID="ac9099c2dcc1c109c19d71ba0aa8ba6b06382c4d9cd67585b0af07f9df929d7a" Jan 24 00:23:31 crc kubenswrapper[4676]: E0124 00:23:31.503477 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac9099c2dcc1c109c19d71ba0aa8ba6b06382c4d9cd67585b0af07f9df929d7a\": container with ID starting with ac9099c2dcc1c109c19d71ba0aa8ba6b06382c4d9cd67585b0af07f9df929d7a not found: ID does not exist" containerID="ac9099c2dcc1c109c19d71ba0aa8ba6b06382c4d9cd67585b0af07f9df929d7a" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.503504 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9099c2dcc1c109c19d71ba0aa8ba6b06382c4d9cd67585b0af07f9df929d7a"} err="failed to get container status \"ac9099c2dcc1c109c19d71ba0aa8ba6b06382c4d9cd67585b0af07f9df929d7a\": rpc error: code = NotFound desc = could not find container \"ac9099c2dcc1c109c19d71ba0aa8ba6b06382c4d9cd67585b0af07f9df929d7a\": container with ID starting with ac9099c2dcc1c109c19d71ba0aa8ba6b06382c4d9cd67585b0af07f9df929d7a not found: ID does not exist" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.503522 4676 scope.go:117] "RemoveContainer" containerID="90033ae7fbe939cb7401b9fbf3c1fcdc688cc9fa72836aaba5d889149f24959e" Jan 24 00:23:31 crc kubenswrapper[4676]: E0124 00:23:31.505257 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90033ae7fbe939cb7401b9fbf3c1fcdc688cc9fa72836aaba5d889149f24959e\": container with ID starting with 
90033ae7fbe939cb7401b9fbf3c1fcdc688cc9fa72836aaba5d889149f24959e not found: ID does not exist" containerID="90033ae7fbe939cb7401b9fbf3c1fcdc688cc9fa72836aaba5d889149f24959e" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.505277 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90033ae7fbe939cb7401b9fbf3c1fcdc688cc9fa72836aaba5d889149f24959e"} err="failed to get container status \"90033ae7fbe939cb7401b9fbf3c1fcdc688cc9fa72836aaba5d889149f24959e\": rpc error: code = NotFound desc = could not find container \"90033ae7fbe939cb7401b9fbf3c1fcdc688cc9fa72836aaba5d889149f24959e\": container with ID starting with 90033ae7fbe939cb7401b9fbf3c1fcdc688cc9fa72836aaba5d889149f24959e not found: ID does not exist" Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.531425 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-756b7b5794-xhgs6"] Jan 24 00:23:31 crc kubenswrapper[4676]: I0124 00:23:31.573862 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-756b7b5794-xhgs6"] Jan 24 00:23:32 crc kubenswrapper[4676]: I0124 00:23:32.264806 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d557c78-075a-44f7-a530-860ae3ec8ffd" path="/var/lib/kubelet/pods/1d557c78-075a-44f7-a530-860ae3ec8ffd/volumes" Jan 24 00:23:32 crc kubenswrapper[4676]: I0124 00:23:32.426485 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"042ec835-b8c1-43be-a19d-d70f76128e26","Type":"ContainerStarted","Data":"40e61576763be5ee0a33a70a8b23ef0b97c5bf95ff7b4040c19eeaaaf7e15abc"} Jan 24 00:23:32 crc kubenswrapper[4676]: I0124 00:23:32.447421 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.4474051150000005 podStartE2EDuration="4.447405115s" podCreationTimestamp="2026-01-24 00:23:28 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:23:32.441700931 +0000 UTC m=+1196.471671932" watchObservedRunningTime="2026-01-24 00:23:32.447405115 +0000 UTC m=+1196.477376116" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.437284 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptgmr"] Jan 24 00:23:35 crc kubenswrapper[4676]: E0124 00:23:35.437900 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd4cfd3-72ab-45d6-8683-85acb6cadf66" containerName="mariadb-database-create" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.437911 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd4cfd3-72ab-45d6-8683-85acb6cadf66" containerName="mariadb-database-create" Jan 24 00:23:35 crc kubenswrapper[4676]: E0124 00:23:35.437923 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d557c78-075a-44f7-a530-860ae3ec8ffd" containerName="neutron-api" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.437932 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d557c78-075a-44f7-a530-860ae3ec8ffd" containerName="neutron-api" Jan 24 00:23:35 crc kubenswrapper[4676]: E0124 00:23:35.437940 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d557c78-075a-44f7-a530-860ae3ec8ffd" containerName="neutron-httpd" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.437946 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d557c78-075a-44f7-a530-860ae3ec8ffd" containerName="neutron-httpd" Jan 24 00:23:35 crc kubenswrapper[4676]: E0124 00:23:35.437960 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792cc535-5ed4-4e06-a19c-31ba34c7dfc7" containerName="mariadb-database-create" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.437965 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="792cc535-5ed4-4e06-a19c-31ba34c7dfc7" 
containerName="mariadb-database-create" Jan 24 00:23:35 crc kubenswrapper[4676]: E0124 00:23:35.437977 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd4662bb-5179-4f57-8571-5198dcf69bdb" containerName="mariadb-account-create-update" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.437984 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd4662bb-5179-4f57-8571-5198dcf69bdb" containerName="mariadb-account-create-update" Jan 24 00:23:35 crc kubenswrapper[4676]: E0124 00:23:35.437997 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5f8ec9-ee62-4078-846c-291a47631ffb" containerName="mariadb-account-create-update" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.438003 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5f8ec9-ee62-4078-846c-291a47631ffb" containerName="mariadb-account-create-update" Jan 24 00:23:35 crc kubenswrapper[4676]: E0124 00:23:35.438017 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d47faf0-8eaa-474f-8cde-aed6c10f4a05" containerName="mariadb-account-create-update" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.438023 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d47faf0-8eaa-474f-8cde-aed6c10f4a05" containerName="mariadb-account-create-update" Jan 24 00:23:35 crc kubenswrapper[4676]: E0124 00:23:35.438034 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee552380-8d96-4a10-a5b8-2deb2e73b15f" containerName="mariadb-database-create" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.438041 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee552380-8d96-4a10-a5b8-2deb2e73b15f" containerName="mariadb-database-create" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.443304 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd4662bb-5179-4f57-8571-5198dcf69bdb" containerName="mariadb-account-create-update" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 
00:23:35.443342 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd4cfd3-72ab-45d6-8683-85acb6cadf66" containerName="mariadb-database-create" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.443354 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="792cc535-5ed4-4e06-a19c-31ba34c7dfc7" containerName="mariadb-database-create" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.443367 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d557c78-075a-44f7-a530-860ae3ec8ffd" containerName="neutron-httpd" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.443380 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee552380-8d96-4a10-a5b8-2deb2e73b15f" containerName="mariadb-database-create" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.443402 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a5f8ec9-ee62-4078-846c-291a47631ffb" containerName="mariadb-account-create-update" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.443413 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d557c78-075a-44f7-a530-860ae3ec8ffd" containerName="neutron-api" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.443426 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d47faf0-8eaa-474f-8cde-aed6c10f4a05" containerName="mariadb-account-create-update" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.443983 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.449818 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.450102 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-49ln5" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.450356 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.462674 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptgmr"] Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.491341 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-scripts\") pod \"nova-cell0-conductor-db-sync-ptgmr\" (UID: \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.491410 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttltl\" (UniqueName: \"kubernetes.io/projected/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-kube-api-access-ttltl\") pod \"nova-cell0-conductor-db-sync-ptgmr\" (UID: \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.491720 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ptgmr\" (UID: \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " 
pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.491783 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-config-data\") pod \"nova-cell0-conductor-db-sync-ptgmr\" (UID: \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.596943 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ptgmr\" (UID: \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.596999 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-config-data\") pod \"nova-cell0-conductor-db-sync-ptgmr\" (UID: \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.597056 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-scripts\") pod \"nova-cell0-conductor-db-sync-ptgmr\" (UID: \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.597092 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttltl\" (UniqueName: \"kubernetes.io/projected/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-kube-api-access-ttltl\") pod \"nova-cell0-conductor-db-sync-ptgmr\" (UID: 
\"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.613788 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-scripts\") pod \"nova-cell0-conductor-db-sync-ptgmr\" (UID: \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.616068 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-config-data\") pod \"nova-cell0-conductor-db-sync-ptgmr\" (UID: \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.616187 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ptgmr\" (UID: \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.617085 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttltl\" (UniqueName: \"kubernetes.io/projected/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-kube-api-access-ttltl\") pod \"nova-cell0-conductor-db-sync-ptgmr\" (UID: \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:23:35 crc kubenswrapper[4676]: I0124 00:23:35.773276 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:23:36 crc kubenswrapper[4676]: I0124 00:23:36.299321 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptgmr"] Jan 24 00:23:36 crc kubenswrapper[4676]: I0124 00:23:36.475755 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptgmr" event={"ID":"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff","Type":"ContainerStarted","Data":"74f951080123fecb0a6fc99ce221e6ac1980472b308fae5306fc8a030c3089f7"} Jan 24 00:23:38 crc kubenswrapper[4676]: I0124 00:23:38.026030 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 24 00:23:38 crc kubenswrapper[4676]: I0124 00:23:38.026284 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 24 00:23:38 crc kubenswrapper[4676]: I0124 00:23:38.063189 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 24 00:23:38 crc kubenswrapper[4676]: I0124 00:23:38.128157 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 24 00:23:38 crc kubenswrapper[4676]: I0124 00:23:38.490517 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 24 00:23:38 crc kubenswrapper[4676]: I0124 00:23:38.490564 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 24 00:23:38 crc kubenswrapper[4676]: I0124 00:23:38.706812 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 24 00:23:38 crc kubenswrapper[4676]: I0124 00:23:38.707067 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-internal-api-0" Jan 24 00:23:38 crc kubenswrapper[4676]: I0124 00:23:38.748381 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 24 00:23:38 crc kubenswrapper[4676]: I0124 00:23:38.750189 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 24 00:23:39 crc kubenswrapper[4676]: I0124 00:23:39.364156 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:23:39 crc kubenswrapper[4676]: I0124 00:23:39.364206 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:23:39 crc kubenswrapper[4676]: I0124 00:23:39.364246 4676 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:23:39 crc kubenswrapper[4676]: I0124 00:23:39.364953 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f13744ab61f6ff84c30249dfd3e19836649d7bb6c4e4a3db144939c565fd684d"} pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 00:23:39 crc kubenswrapper[4676]: I0124 00:23:39.365003 4676 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" containerID="cri-o://f13744ab61f6ff84c30249dfd3e19836649d7bb6c4e4a3db144939c565fd684d" gracePeriod=600 Jan 24 00:23:39 crc kubenswrapper[4676]: I0124 00:23:39.499072 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 00:23:39 crc kubenswrapper[4676]: I0124 00:23:39.499368 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 00:23:40 crc kubenswrapper[4676]: I0124 00:23:40.171709 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:23:40 crc kubenswrapper[4676]: I0124 00:23:40.172022 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:23:40 crc kubenswrapper[4676]: I0124 00:23:40.173534 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-bf988b4bd-ls7hp" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 24 00:23:40 crc kubenswrapper[4676]: I0124 00:23:40.321685 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:23:40 crc kubenswrapper[4676]: I0124 00:23:40.321953 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:23:40 crc kubenswrapper[4676]: I0124 00:23:40.323850 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-f876ddf46-fs7qv" podUID="ac7dce6b-3bd9-4ad9-9485-83d9384b8bad" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 24 00:23:40 crc kubenswrapper[4676]: I0124 00:23:40.514629 4676 generic.go:334] "Generic (PLEG): container finished" podID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerID="f13744ab61f6ff84c30249dfd3e19836649d7bb6c4e4a3db144939c565fd684d" exitCode=0 Jan 24 00:23:40 crc kubenswrapper[4676]: I0124 00:23:40.515439 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerDied","Data":"f13744ab61f6ff84c30249dfd3e19836649d7bb6c4e4a3db144939c565fd684d"} Jan 24 00:23:40 crc kubenswrapper[4676]: I0124 00:23:40.515463 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"6c0fdc4fa29c1a85e06a8e0b7d6899a3299af1dbe5c0f08e67bee66057d6c55e"} Jan 24 00:23:40 crc kubenswrapper[4676]: I0124 00:23:40.515479 4676 scope.go:117] "RemoveContainer" containerID="2687110039e3aba350a72ed3647bbafb008d22f301a8b50baa7159c6eca5ba33" Jan 24 00:23:41 crc kubenswrapper[4676]: I0124 00:23:41.388184 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 24 00:23:41 crc kubenswrapper[4676]: I0124 00:23:41.388611 4676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:23:41 crc kubenswrapper[4676]: I0124 00:23:41.398204 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 24 00:23:41 crc kubenswrapper[4676]: I0124 00:23:41.577675 4676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:23:41 crc kubenswrapper[4676]: I0124 00:23:41.577696 4676 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Jan 24 00:23:42 crc kubenswrapper[4676]: I0124 00:23:42.370126 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 24 00:23:42 crc kubenswrapper[4676]: I0124 00:23:42.587207 4676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:23:42 crc kubenswrapper[4676]: I0124 00:23:42.734015 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 24 00:23:49 crc kubenswrapper[4676]: I0124 00:23:49.649560 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptgmr" event={"ID":"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff","Type":"ContainerStarted","Data":"ba2f94acd9876a58363d3def84609810640e18145307211383bf62958eaadb26"} Jan 24 00:23:49 crc kubenswrapper[4676]: I0124 00:23:49.672429 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-ptgmr" podStartSLOduration=2.055133307 podStartE2EDuration="14.672411101s" podCreationTimestamp="2026-01-24 00:23:35 +0000 UTC" firstStartedPulling="2026-01-24 00:23:36.312905834 +0000 UTC m=+1200.342876835" lastFinishedPulling="2026-01-24 00:23:48.930183628 +0000 UTC m=+1212.960154629" observedRunningTime="2026-01-24 00:23:49.664836191 +0000 UTC m=+1213.694807192" watchObservedRunningTime="2026-01-24 00:23:49.672411101 +0000 UTC m=+1213.702382102" Jan 24 00:23:50 crc kubenswrapper[4676]: I0124 00:23:50.173490 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-bf988b4bd-ls7hp" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 24 00:23:50 crc kubenswrapper[4676]: I0124 00:23:50.322970 4676 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/horizon-f876ddf46-fs7qv" podUID="ac7dce6b-3bd9-4ad9-9485-83d9384b8bad" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.325632 4676 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod54d56910-d4b7-45b1-8699-5af7eaa29b96"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod54d56910-d4b7-45b1-8699-5af7eaa29b96] : Timed out while waiting for systemd to remove kubepods-besteffort-pod54d56910_d4b7_45b1_8699_5af7eaa29b96.slice" Jan 24 00:23:56 crc kubenswrapper[4676]: E0124 00:23:56.326152 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod54d56910-d4b7-45b1-8699-5af7eaa29b96] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod54d56910-d4b7-45b1-8699-5af7eaa29b96] : Timed out while waiting for systemd to remove kubepods-besteffort-pod54d56910_d4b7_45b1_8699_5af7eaa29b96.slice" pod="openstack/ceilometer-0" podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.714064 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.799642 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.808512 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.840885 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.843135 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.846234 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.846449 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.858362 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.950834 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.951309 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhdb9\" (UniqueName: \"kubernetes.io/projected/30206279-8386-4575-9457-e76760505e8d-kube-api-access-qhdb9\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.951467 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30206279-8386-4575-9457-e76760505e8d-log-httpd\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.951602 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-scripts\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " 
pod="openstack/ceilometer-0" Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.951714 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.951835 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-config-data\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:56 crc kubenswrapper[4676]: I0124 00:23:56.951950 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30206279-8386-4575-9457-e76760505e8d-run-httpd\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.053266 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30206279-8386-4575-9457-e76760505e8d-log-httpd\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.053607 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-scripts\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.053678 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/30206279-8386-4575-9457-e76760505e8d-log-httpd\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.053761 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.053844 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-config-data\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.053960 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30206279-8386-4575-9457-e76760505e8d-run-httpd\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.054254 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30206279-8386-4575-9457-e76760505e8d-run-httpd\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.054466 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.054597 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhdb9\" (UniqueName: \"kubernetes.io/projected/30206279-8386-4575-9457-e76760505e8d-kube-api-access-qhdb9\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.059900 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-config-data\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.060146 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-scripts\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.060819 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.067224 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.075352 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhdb9\" (UniqueName: \"kubernetes.io/projected/30206279-8386-4575-9457-e76760505e8d-kube-api-access-qhdb9\") pod \"ceilometer-0\" (UID: 
\"30206279-8386-4575-9457-e76760505e8d\") " pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.161532 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.639882 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:23:57 crc kubenswrapper[4676]: W0124 00:23:57.641560 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30206279_8386_4575_9457_e76760505e8d.slice/crio-27bde3b4f936759dfdde3e8fe3259a873617d7354ea2e22a706cd0edd462041d WatchSource:0}: Error finding container 27bde3b4f936759dfdde3e8fe3259a873617d7354ea2e22a706cd0edd462041d: Status 404 returned error can't find the container with id 27bde3b4f936759dfdde3e8fe3259a873617d7354ea2e22a706cd0edd462041d Jan 24 00:23:57 crc kubenswrapper[4676]: I0124 00:23:57.722958 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30206279-8386-4575-9457-e76760505e8d","Type":"ContainerStarted","Data":"27bde3b4f936759dfdde3e8fe3259a873617d7354ea2e22a706cd0edd462041d"} Jan 24 00:23:58 crc kubenswrapper[4676]: I0124 00:23:58.266556 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54d56910-d4b7-45b1-8699-5af7eaa29b96" path="/var/lib/kubelet/pods/54d56910-d4b7-45b1-8699-5af7eaa29b96/volumes" Jan 24 00:23:58 crc kubenswrapper[4676]: I0124 00:23:58.732231 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30206279-8386-4575-9457-e76760505e8d","Type":"ContainerStarted","Data":"acf37fd15df58f0c480e8ca3019535bc0893909734220600f3f004d89aa3edc9"} Jan 24 00:23:59 crc kubenswrapper[4676]: I0124 00:23:59.741129 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"30206279-8386-4575-9457-e76760505e8d","Type":"ContainerStarted","Data":"74fc38a94fa794a834f28752c4c0e574b0f6bd303f04b96af63769a91e8933fe"} Jan 24 00:24:00 crc kubenswrapper[4676]: I0124 00:24:00.749646 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30206279-8386-4575-9457-e76760505e8d","Type":"ContainerStarted","Data":"993ed8d33422939d102af22915be4b96f85bac159232e43af92e15ee1383a46a"} Jan 24 00:24:01 crc kubenswrapper[4676]: I0124 00:24:01.758240 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30206279-8386-4575-9457-e76760505e8d","Type":"ContainerStarted","Data":"cf2f47547ef5fe4c3cd5d6009235ea3a947829e0913a6e11bb70499604aaa440"} Jan 24 00:24:01 crc kubenswrapper[4676]: I0124 00:24:01.758869 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 00:24:01 crc kubenswrapper[4676]: I0124 00:24:01.802847 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.008078001 podStartE2EDuration="5.802822488s" podCreationTimestamp="2026-01-24 00:23:56 +0000 UTC" firstStartedPulling="2026-01-24 00:23:57.644403741 +0000 UTC m=+1221.674374742" lastFinishedPulling="2026-01-24 00:24:01.439148228 +0000 UTC m=+1225.469119229" observedRunningTime="2026-01-24 00:24:01.776520659 +0000 UTC m=+1225.806491660" watchObservedRunningTime="2026-01-24 00:24:01.802822488 +0000 UTC m=+1225.832793499" Jan 24 00:24:03 crc kubenswrapper[4676]: I0124 00:24:03.242792 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:24:03 crc kubenswrapper[4676]: I0124 00:24:03.375044 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:24:03 crc kubenswrapper[4676]: I0124 00:24:03.774557 4676 generic.go:334] "Generic (PLEG): container finished" 
podID="9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff" containerID="ba2f94acd9876a58363d3def84609810640e18145307211383bf62958eaadb26" exitCode=0 Jan 24 00:24:03 crc kubenswrapper[4676]: I0124 00:24:03.774607 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptgmr" event={"ID":"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff","Type":"ContainerDied","Data":"ba2f94acd9876a58363d3def84609810640e18145307211383bf62958eaadb26"} Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.162033 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.322546 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttltl\" (UniqueName: \"kubernetes.io/projected/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-kube-api-access-ttltl\") pod \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\" (UID: \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.322585 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-combined-ca-bundle\") pod \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\" (UID: \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.322668 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-scripts\") pod \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\" (UID: \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.322707 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-config-data\") pod 
\"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\" (UID: \"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff\") " Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.351615 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-scripts" (OuterVolumeSpecName: "scripts") pod "9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff" (UID: "9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.354505 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-kube-api-access-ttltl" (OuterVolumeSpecName: "kube-api-access-ttltl") pod "9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff" (UID: "9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff"). InnerVolumeSpecName "kube-api-access-ttltl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.398554 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff" (UID: "9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.424305 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttltl\" (UniqueName: \"kubernetes.io/projected/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-kube-api-access-ttltl\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.424328 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.424338 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.426919 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-config-data" (OuterVolumeSpecName: "config-data") pod "9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff" (UID: "9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.525733 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.790957 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptgmr" event={"ID":"9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff","Type":"ContainerDied","Data":"74f951080123fecb0a6fc99ce221e6ac1980472b308fae5306fc8a030c3089f7"} Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.790993 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74f951080123fecb0a6fc99ce221e6ac1980472b308fae5306fc8a030c3089f7" Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.791048 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptgmr" Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.854669 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-f876ddf46-fs7qv" Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.874070 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:24:05 crc kubenswrapper[4676]: I0124 00:24:05.983969 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bf988b4bd-ls7hp"] Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.009369 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 24 00:24:06 crc kubenswrapper[4676]: E0124 00:24:06.011076 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff" containerName="nova-cell0-conductor-db-sync" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.011173 4676 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff" containerName="nova-cell0-conductor-db-sync" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.011419 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff" containerName="nova-cell0-conductor-db-sync" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.012085 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.015341 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-49ln5" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.015654 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.028287 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.144213 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/400ba963-913a-401c-8f2e-21005977e0c2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"400ba963-913a-401c-8f2e-21005977e0c2\") " pod="openstack/nova-cell0-conductor-0" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.144287 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400ba963-913a-401c-8f2e-21005977e0c2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"400ba963-913a-401c-8f2e-21005977e0c2\") " pod="openstack/nova-cell0-conductor-0" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.144334 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-w88s8\" (UniqueName: \"kubernetes.io/projected/400ba963-913a-401c-8f2e-21005977e0c2-kube-api-access-w88s8\") pod \"nova-cell0-conductor-0\" (UID: \"400ba963-913a-401c-8f2e-21005977e0c2\") " pod="openstack/nova-cell0-conductor-0" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.246455 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/400ba963-913a-401c-8f2e-21005977e0c2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"400ba963-913a-401c-8f2e-21005977e0c2\") " pod="openstack/nova-cell0-conductor-0" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.246513 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400ba963-913a-401c-8f2e-21005977e0c2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"400ba963-913a-401c-8f2e-21005977e0c2\") " pod="openstack/nova-cell0-conductor-0" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.246546 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w88s8\" (UniqueName: \"kubernetes.io/projected/400ba963-913a-401c-8f2e-21005977e0c2-kube-api-access-w88s8\") pod \"nova-cell0-conductor-0\" (UID: \"400ba963-913a-401c-8f2e-21005977e0c2\") " pod="openstack/nova-cell0-conductor-0" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.251973 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/400ba963-913a-401c-8f2e-21005977e0c2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"400ba963-913a-401c-8f2e-21005977e0c2\") " pod="openstack/nova-cell0-conductor-0" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.265843 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w88s8\" (UniqueName: 
\"kubernetes.io/projected/400ba963-913a-401c-8f2e-21005977e0c2-kube-api-access-w88s8\") pod \"nova-cell0-conductor-0\" (UID: \"400ba963-913a-401c-8f2e-21005977e0c2\") " pod="openstack/nova-cell0-conductor-0" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.266803 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400ba963-913a-401c-8f2e-21005977e0c2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"400ba963-913a-401c-8f2e-21005977e0c2\") " pod="openstack/nova-cell0-conductor-0" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.346515 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.798532 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bf988b4bd-ls7hp" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon-log" containerID="cri-o://ca1454b684ac38a1446a4ef033530125143bc9a4b12de7af96f54377b563b8fb" gracePeriod=30 Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.798562 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bf988b4bd-ls7hp" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" containerID="cri-o://6c39f8f7eb482d941eb276b021603978c3c819c52d54e3619ae39a0cd7e29c64" gracePeriod=30 Jan 24 00:24:06 crc kubenswrapper[4676]: I0124 00:24:06.857704 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 24 00:24:07 crc kubenswrapper[4676]: I0124 00:24:07.807541 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"400ba963-913a-401c-8f2e-21005977e0c2","Type":"ContainerStarted","Data":"fb0ae3f2ccbfcd78e945abad4de061cf67626fc297fe90ae5677173d70a3448e"} Jan 24 00:24:07 crc kubenswrapper[4676]: I0124 00:24:07.807880 4676 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"400ba963-913a-401c-8f2e-21005977e0c2","Type":"ContainerStarted","Data":"881e275f0396651176e32e8c7f8fbcde89d91200a6bb57f6d595d6c57c0fa093"} Jan 24 00:24:07 crc kubenswrapper[4676]: I0124 00:24:07.807903 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 24 00:24:07 crc kubenswrapper[4676]: I0124 00:24:07.829668 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.829653971 podStartE2EDuration="2.829653971s" podCreationTimestamp="2026-01-24 00:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:24:07.825766482 +0000 UTC m=+1231.855737483" watchObservedRunningTime="2026-01-24 00:24:07.829653971 +0000 UTC m=+1231.859624972" Jan 24 00:24:10 crc kubenswrapper[4676]: I0124 00:24:10.172480 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-bf988b4bd-ls7hp" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 24 00:24:10 crc kubenswrapper[4676]: I0124 00:24:10.831906 4676 generic.go:334] "Generic (PLEG): container finished" podID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerID="6c39f8f7eb482d941eb276b021603978c3c819c52d54e3619ae39a0cd7e29c64" exitCode=0 Jan 24 00:24:10 crc kubenswrapper[4676]: I0124 00:24:10.832010 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf988b4bd-ls7hp" event={"ID":"9d2451a0-4896-46e4-9b9e-e309ccdf02f2","Type":"ContainerDied","Data":"6c39f8f7eb482d941eb276b021603978c3c819c52d54e3619ae39a0cd7e29c64"} Jan 24 00:24:10 crc kubenswrapper[4676]: I0124 00:24:10.832215 4676 
scope.go:117] "RemoveContainer" containerID="9b6a2140f368edfea407714ce9a9afb7bd7057997de5cfab1861b7a16523894e" Jan 24 00:24:16 crc kubenswrapper[4676]: I0124 00:24:16.398362 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 24 00:24:16 crc kubenswrapper[4676]: I0124 00:24:16.964817 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-2cwbv"] Jan 24 00:24:16 crc kubenswrapper[4676]: I0124 00:24:16.966867 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:16 crc kubenswrapper[4676]: I0124 00:24:16.973283 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 24 00:24:16 crc kubenswrapper[4676]: I0124 00:24:16.981501 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 24 00:24:16 crc kubenswrapper[4676]: I0124 00:24:16.988638 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2cwbv"] Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.075585 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-scripts\") pod \"nova-cell0-cell-mapping-2cwbv\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.075622 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j96h\" (UniqueName: \"kubernetes.io/projected/8ee94537-f601-4765-95cb-d56518fb7fc6-kube-api-access-2j96h\") pod \"nova-cell0-cell-mapping-2cwbv\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:17 crc 
kubenswrapper[4676]: I0124 00:24:17.075677 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-config-data\") pod \"nova-cell0-cell-mapping-2cwbv\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.075701 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2cwbv\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.133395 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.134771 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.141221 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.154283 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.180172 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2cwbv\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.180744 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-scripts\") pod \"nova-cell0-cell-mapping-2cwbv\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.180762 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j96h\" (UniqueName: \"kubernetes.io/projected/8ee94537-f601-4765-95cb-d56518fb7fc6-kube-api-access-2j96h\") pod \"nova-cell0-cell-mapping-2cwbv\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.180813 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-config-data\") pod \"nova-cell0-cell-mapping-2cwbv\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.195137 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-config-data\") pod \"nova-cell0-cell-mapping-2cwbv\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.196635 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-scripts\") pod \"nova-cell0-cell-mapping-2cwbv\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.199098 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.200127 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.202546 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2cwbv\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.208920 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.240416 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.253563 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j96h\" (UniqueName: \"kubernetes.io/projected/8ee94537-f601-4765-95cb-d56518fb7fc6-kube-api-access-2j96h\") pod 
\"nova-cell0-cell-mapping-2cwbv\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.282255 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d70bd7f-2410-490a-897f-c17149dbc01d-config-data\") pod \"nova-api-0\" (UID: \"0d70bd7f-2410-490a-897f-c17149dbc01d\") " pod="openstack/nova-api-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.282303 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsdvz\" (UniqueName: \"kubernetes.io/projected/0d70bd7f-2410-490a-897f-c17149dbc01d-kube-api-access-fsdvz\") pod \"nova-api-0\" (UID: \"0d70bd7f-2410-490a-897f-c17149dbc01d\") " pod="openstack/nova-api-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.282341 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b86cf1-9245-4a23-aac1-262c54e60716-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98b86cf1-9245-4a23-aac1-262c54e60716\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.282362 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzqhw\" (UniqueName: \"kubernetes.io/projected/98b86cf1-9245-4a23-aac1-262c54e60716-kube-api-access-pzqhw\") pod \"nova-scheduler-0\" (UID: \"98b86cf1-9245-4a23-aac1-262c54e60716\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.282473 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d70bd7f-2410-490a-897f-c17149dbc01d-logs\") pod \"nova-api-0\" (UID: \"0d70bd7f-2410-490a-897f-c17149dbc01d\") " 
pod="openstack/nova-api-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.282508 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b86cf1-9245-4a23-aac1-262c54e60716-config-data\") pod \"nova-scheduler-0\" (UID: \"98b86cf1-9245-4a23-aac1-262c54e60716\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.282550 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d70bd7f-2410-490a-897f-c17149dbc01d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d70bd7f-2410-490a-897f-c17149dbc01d\") " pod="openstack/nova-api-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.306339 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.388031 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d70bd7f-2410-490a-897f-c17149dbc01d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d70bd7f-2410-490a-897f-c17149dbc01d\") " pod="openstack/nova-api-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.388093 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d70bd7f-2410-490a-897f-c17149dbc01d-config-data\") pod \"nova-api-0\" (UID: \"0d70bd7f-2410-490a-897f-c17149dbc01d\") " pod="openstack/nova-api-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.388118 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsdvz\" (UniqueName: \"kubernetes.io/projected/0d70bd7f-2410-490a-897f-c17149dbc01d-kube-api-access-fsdvz\") pod \"nova-api-0\" (UID: 
\"0d70bd7f-2410-490a-897f-c17149dbc01d\") " pod="openstack/nova-api-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.388150 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b86cf1-9245-4a23-aac1-262c54e60716-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98b86cf1-9245-4a23-aac1-262c54e60716\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.388172 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzqhw\" (UniqueName: \"kubernetes.io/projected/98b86cf1-9245-4a23-aac1-262c54e60716-kube-api-access-pzqhw\") pod \"nova-scheduler-0\" (UID: \"98b86cf1-9245-4a23-aac1-262c54e60716\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.388278 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d70bd7f-2410-490a-897f-c17149dbc01d-logs\") pod \"nova-api-0\" (UID: \"0d70bd7f-2410-490a-897f-c17149dbc01d\") " pod="openstack/nova-api-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.388300 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b86cf1-9245-4a23-aac1-262c54e60716-config-data\") pod \"nova-scheduler-0\" (UID: \"98b86cf1-9245-4a23-aac1-262c54e60716\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.395698 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d70bd7f-2410-490a-897f-c17149dbc01d-logs\") pod \"nova-api-0\" (UID: \"0d70bd7f-2410-490a-897f-c17149dbc01d\") " pod="openstack/nova-api-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.399471 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/98b86cf1-9245-4a23-aac1-262c54e60716-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98b86cf1-9245-4a23-aac1-262c54e60716\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.412179 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d70bd7f-2410-490a-897f-c17149dbc01d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d70bd7f-2410-490a-897f-c17149dbc01d\") " pod="openstack/nova-api-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.412286 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d70bd7f-2410-490a-897f-c17149dbc01d-config-data\") pod \"nova-api-0\" (UID: \"0d70bd7f-2410-490a-897f-c17149dbc01d\") " pod="openstack/nova-api-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.419902 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b86cf1-9245-4a23-aac1-262c54e60716-config-data\") pod \"nova-scheduler-0\" (UID: \"98b86cf1-9245-4a23-aac1-262c54e60716\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.498799 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzqhw\" (UniqueName: \"kubernetes.io/projected/98b86cf1-9245-4a23-aac1-262c54e60716-kube-api-access-pzqhw\") pod \"nova-scheduler-0\" (UID: \"98b86cf1-9245-4a23-aac1-262c54e60716\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.507034 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsdvz\" (UniqueName: \"kubernetes.io/projected/0d70bd7f-2410-490a-897f-c17149dbc01d-kube-api-access-fsdvz\") pod \"nova-api-0\" (UID: \"0d70bd7f-2410-490a-897f-c17149dbc01d\") " pod="openstack/nova-api-0" Jan 24 00:24:17 crc 
kubenswrapper[4676]: I0124 00:24:17.546555 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.547810 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.569712 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.580323 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.582751 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.594783 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.618824 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.636336 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.663251 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.700916 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84633021-2bad-4066-a01c-44fef6902524-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"84633021-2bad-4066-a01c-44fef6902524\") " pod="openstack/nova-metadata-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.701157 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84633021-2bad-4066-a01c-44fef6902524-config-data\") pod \"nova-metadata-0\" (UID: \"84633021-2bad-4066-a01c-44fef6902524\") " pod="openstack/nova-metadata-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.701195 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65324a0f-f62a-4807-85d2-e4b607b2a0b4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"65324a0f-f62a-4807-85d2-e4b607b2a0b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.701221 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65324a0f-f62a-4807-85d2-e4b607b2a0b4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"65324a0f-f62a-4807-85d2-e4b607b2a0b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.701239 4676 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9hf5\" (UniqueName: \"kubernetes.io/projected/65324a0f-f62a-4807-85d2-e4b607b2a0b4-kube-api-access-p9hf5\") pod \"nova-cell1-novncproxy-0\" (UID: \"65324a0f-f62a-4807-85d2-e4b607b2a0b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.701259 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f77n\" (UniqueName: \"kubernetes.io/projected/84633021-2bad-4066-a01c-44fef6902524-kube-api-access-8f77n\") pod \"nova-metadata-0\" (UID: \"84633021-2bad-4066-a01c-44fef6902524\") " pod="openstack/nova-metadata-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.701282 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84633021-2bad-4066-a01c-44fef6902524-logs\") pod \"nova-metadata-0\" (UID: \"84633021-2bad-4066-a01c-44fef6902524\") " pod="openstack/nova-metadata-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.766607 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.802520 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84633021-2bad-4066-a01c-44fef6902524-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"84633021-2bad-4066-a01c-44fef6902524\") " pod="openstack/nova-metadata-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.802617 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84633021-2bad-4066-a01c-44fef6902524-config-data\") pod \"nova-metadata-0\" (UID: \"84633021-2bad-4066-a01c-44fef6902524\") " pod="openstack/nova-metadata-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.802666 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65324a0f-f62a-4807-85d2-e4b607b2a0b4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"65324a0f-f62a-4807-85d2-e4b607b2a0b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.802693 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65324a0f-f62a-4807-85d2-e4b607b2a0b4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"65324a0f-f62a-4807-85d2-e4b607b2a0b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.802710 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9hf5\" (UniqueName: \"kubernetes.io/projected/65324a0f-f62a-4807-85d2-e4b607b2a0b4-kube-api-access-p9hf5\") pod \"nova-cell1-novncproxy-0\" (UID: \"65324a0f-f62a-4807-85d2-e4b607b2a0b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.802735 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f77n\" (UniqueName: \"kubernetes.io/projected/84633021-2bad-4066-a01c-44fef6902524-kube-api-access-8f77n\") pod \"nova-metadata-0\" (UID: \"84633021-2bad-4066-a01c-44fef6902524\") " pod="openstack/nova-metadata-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.802756 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84633021-2bad-4066-a01c-44fef6902524-logs\") pod \"nova-metadata-0\" (UID: \"84633021-2bad-4066-a01c-44fef6902524\") " pod="openstack/nova-metadata-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.803194 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84633021-2bad-4066-a01c-44fef6902524-logs\") pod \"nova-metadata-0\" (UID: \"84633021-2bad-4066-a01c-44fef6902524\") " pod="openstack/nova-metadata-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.822967 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65324a0f-f62a-4807-85d2-e4b607b2a0b4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"65324a0f-f62a-4807-85d2-e4b607b2a0b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.828486 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84633021-2bad-4066-a01c-44fef6902524-config-data\") pod \"nova-metadata-0\" (UID: \"84633021-2bad-4066-a01c-44fef6902524\") " pod="openstack/nova-metadata-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.830220 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84633021-2bad-4066-a01c-44fef6902524-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"84633021-2bad-4066-a01c-44fef6902524\") " pod="openstack/nova-metadata-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.834863 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65324a0f-f62a-4807-85d2-e4b607b2a0b4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"65324a0f-f62a-4807-85d2-e4b607b2a0b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.846233 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f77n\" (UniqueName: \"kubernetes.io/projected/84633021-2bad-4066-a01c-44fef6902524-kube-api-access-8f77n\") pod \"nova-metadata-0\" (UID: \"84633021-2bad-4066-a01c-44fef6902524\") " pod="openstack/nova-metadata-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.856392 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9hf5\" (UniqueName: \"kubernetes.io/projected/65324a0f-f62a-4807-85d2-e4b607b2a0b4-kube-api-access-p9hf5\") pod \"nova-cell1-novncproxy-0\" (UID: \"65324a0f-f62a-4807-85d2-e4b607b2a0b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.883860 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.904400 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.952418 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-72qp2"] Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.962159 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:17 crc kubenswrapper[4676]: I0124 00:24:17.983020 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-72qp2"] Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.119628 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.119958 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.120000 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.120028 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-dns-svc\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.120051 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-c2kxc\" (UniqueName: \"kubernetes.io/projected/b8c07196-aa1d-4d14-bc9f-6aec4de13853-kube-api-access-c2kxc\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.120087 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-config\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.223712 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.223780 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.223838 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.223875 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-dns-svc\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.223897 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2kxc\" (UniqueName: \"kubernetes.io/projected/b8c07196-aa1d-4d14-bc9f-6aec4de13853-kube-api-access-c2kxc\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.223946 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-config\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.224839 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-config\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.224839 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.225010 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-dns-svc\") pod 
\"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.225350 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.225617 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.254035 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2kxc\" (UniqueName: \"kubernetes.io/projected/b8c07196-aa1d-4d14-bc9f-6aec4de13853-kube-api-access-c2kxc\") pod \"dnsmasq-dns-bccf8f775-72qp2\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") " pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.284534 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2cwbv"] Jan 24 00:24:18 crc kubenswrapper[4676]: W0124 00:24:18.308122 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ee94537_f601_4765_95cb_d56518fb7fc6.slice/crio-94c2e3389aa98f25b2feddec05c3db32269606b8ba9977ffa3565e20fb1c9ec4 WatchSource:0}: Error finding container 94c2e3389aa98f25b2feddec05c3db32269606b8ba9977ffa3565e20fb1c9ec4: Status 404 returned error can't find the container with id 94c2e3389aa98f25b2feddec05c3db32269606b8ba9977ffa3565e20fb1c9ec4 Jan 24 00:24:18 
crc kubenswrapper[4676]: I0124 00:24:18.352958 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.684081 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.724088 4676 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.729746 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.747320 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.834206 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 00:24:18 crc kubenswrapper[4676]: W0124 00:24:18.838477 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84633021_2bad_4066_a01c_44fef6902524.slice/crio-6e82aa991fb418472d65f2acfb31fecbeef7f12dbcdb1246d40b5110c0b6ab65 WatchSource:0}: Error finding container 6e82aa991fb418472d65f2acfb31fecbeef7f12dbcdb1246d40b5110c0b6ab65: Status 404 returned error can't find the container with id 6e82aa991fb418472d65f2acfb31fecbeef7f12dbcdb1246d40b5110c0b6ab65 Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.941654 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84633021-2bad-4066-a01c-44fef6902524","Type":"ContainerStarted","Data":"6e82aa991fb418472d65f2acfb31fecbeef7f12dbcdb1246d40b5110c0b6ab65"} Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.955824 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2cwbv" 
event={"ID":"8ee94537-f601-4765-95cb-d56518fb7fc6","Type":"ContainerStarted","Data":"78da04e8326c020891ea05f676639495efe89b95f4c1821c5af31b8929259b96"} Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.955877 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2cwbv" event={"ID":"8ee94537-f601-4765-95cb-d56518fb7fc6","Type":"ContainerStarted","Data":"94c2e3389aa98f25b2feddec05c3db32269606b8ba9977ffa3565e20fb1c9ec4"} Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.961623 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65324a0f-f62a-4807-85d2-e4b607b2a0b4","Type":"ContainerStarted","Data":"9d99431ded08302b9fd0007acf0016950fd97f2998e3a74b9630c85127cb16f9"} Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.968561 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98b86cf1-9245-4a23-aac1-262c54e60716","Type":"ContainerStarted","Data":"842457e0d43257943e9267d87b0c8b3c36220ce75035a13b025aa38f1c36e69d"} Jan 24 00:24:18 crc kubenswrapper[4676]: I0124 00:24:18.990502 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d70bd7f-2410-490a-897f-c17149dbc01d","Type":"ContainerStarted","Data":"db46016d5ba0953d53096b66a815db696cc1c6c9a8457b6a8c4cbfbde714e915"} Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.015459 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fpw6c"] Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.017356 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.022112 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.022497 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.038771 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fpw6c"] Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.067979 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-2cwbv" podStartSLOduration=3.067956781 podStartE2EDuration="3.067956781s" podCreationTimestamp="2026-01-24 00:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:24:18.995139891 +0000 UTC m=+1243.025110882" watchObservedRunningTime="2026-01-24 00:24:19.067956781 +0000 UTC m=+1243.097927782" Jan 24 00:24:19 crc kubenswrapper[4676]: W0124 00:24:19.076300 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8c07196_aa1d_4d14_bc9f_6aec4de13853.slice/crio-149be033f93bb329597c455a0e6c6a8a209721f648e144a6195c8b2dcfc6602d WatchSource:0}: Error finding container 149be033f93bb329597c455a0e6c6a8a209721f648e144a6195c8b2dcfc6602d: Status 404 returned error can't find the container with id 149be033f93bb329597c455a0e6c6a8a209721f648e144a6195c8b2dcfc6602d Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.081810 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-72qp2"] Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.169467 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2zrw\" (UniqueName: \"kubernetes.io/projected/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-kube-api-access-j2zrw\") pod \"nova-cell1-conductor-db-sync-fpw6c\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.169526 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fpw6c\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.169592 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-scripts\") pod \"nova-cell1-conductor-db-sync-fpw6c\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.169667 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-config-data\") pod \"nova-cell1-conductor-db-sync-fpw6c\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.271813 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-config-data\") pod \"nova-cell1-conductor-db-sync-fpw6c\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:19 crc 
kubenswrapper[4676]: I0124 00:24:19.272273 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2zrw\" (UniqueName: \"kubernetes.io/projected/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-kube-api-access-j2zrw\") pod \"nova-cell1-conductor-db-sync-fpw6c\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.272360 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fpw6c\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.273454 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-scripts\") pod \"nova-cell1-conductor-db-sync-fpw6c\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.277067 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fpw6c\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.281366 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-scripts\") pod \"nova-cell1-conductor-db-sync-fpw6c\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:19 crc kubenswrapper[4676]: 
I0124 00:24:19.287612 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-config-data\") pod \"nova-cell1-conductor-db-sync-fpw6c\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.297118 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2zrw\" (UniqueName: \"kubernetes.io/projected/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-kube-api-access-j2zrw\") pod \"nova-cell1-conductor-db-sync-fpw6c\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.346509 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:19 crc kubenswrapper[4676]: I0124 00:24:19.916200 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fpw6c"] Jan 24 00:24:19 crc kubenswrapper[4676]: W0124 00:24:19.955564 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1b05e80_51fb_476f_a2b4_bf5a290ea5ae.slice/crio-536f2c9f1d41a95dfb3c6cf31c7557c7581ee9ab704135d4570d8f54edee7545 WatchSource:0}: Error finding container 536f2c9f1d41a95dfb3c6cf31c7557c7581ee9ab704135d4570d8f54edee7545: Status 404 returned error can't find the container with id 536f2c9f1d41a95dfb3c6cf31c7557c7581ee9ab704135d4570d8f54edee7545 Jan 24 00:24:20 crc kubenswrapper[4676]: I0124 00:24:20.010935 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fpw6c" event={"ID":"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae","Type":"ContainerStarted","Data":"536f2c9f1d41a95dfb3c6cf31c7557c7581ee9ab704135d4570d8f54edee7545"} Jan 24 00:24:20 crc 
kubenswrapper[4676]: I0124 00:24:20.014094 4676 generic.go:334] "Generic (PLEG): container finished" podID="b8c07196-aa1d-4d14-bc9f-6aec4de13853" containerID="ab4cecb7436eb3bdc8075e2778a585be7c7bc22f8b1112d18ec061da1496853f" exitCode=0 Jan 24 00:24:20 crc kubenswrapper[4676]: I0124 00:24:20.014599 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-72qp2" event={"ID":"b8c07196-aa1d-4d14-bc9f-6aec4de13853","Type":"ContainerDied","Data":"ab4cecb7436eb3bdc8075e2778a585be7c7bc22f8b1112d18ec061da1496853f"} Jan 24 00:24:20 crc kubenswrapper[4676]: I0124 00:24:20.014627 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-72qp2" event={"ID":"b8c07196-aa1d-4d14-bc9f-6aec4de13853","Type":"ContainerStarted","Data":"149be033f93bb329597c455a0e6c6a8a209721f648e144a6195c8b2dcfc6602d"} Jan 24 00:24:20 crc kubenswrapper[4676]: I0124 00:24:20.174717 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-bf988b4bd-ls7hp" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 24 00:24:21 crc kubenswrapper[4676]: I0124 00:24:21.025571 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-72qp2" event={"ID":"b8c07196-aa1d-4d14-bc9f-6aec4de13853","Type":"ContainerStarted","Data":"2dc83da30d1c8b3f8b6fad2b698ea560b01dbdda3cf6667c49b816b31947008d"} Jan 24 00:24:21 crc kubenswrapper[4676]: I0124 00:24:21.025910 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:21 crc kubenswrapper[4676]: I0124 00:24:21.029098 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fpw6c" 
event={"ID":"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae","Type":"ContainerStarted","Data":"1e6d788c12343033e8113f7b2adf05ab1c0f4bf8cb67dce88248d068873baa87"} Jan 24 00:24:21 crc kubenswrapper[4676]: I0124 00:24:21.058448 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-72qp2" podStartSLOduration=4.058420646 podStartE2EDuration="4.058420646s" podCreationTimestamp="2026-01-24 00:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:24:21.044787761 +0000 UTC m=+1245.074758752" watchObservedRunningTime="2026-01-24 00:24:21.058420646 +0000 UTC m=+1245.088391647" Jan 24 00:24:21 crc kubenswrapper[4676]: I0124 00:24:21.066948 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-fpw6c" podStartSLOduration=3.066931495 podStartE2EDuration="3.066931495s" podCreationTimestamp="2026-01-24 00:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:24:21.064160501 +0000 UTC m=+1245.094131502" watchObservedRunningTime="2026-01-24 00:24:21.066931495 +0000 UTC m=+1245.096902496" Jan 24 00:24:21 crc kubenswrapper[4676]: I0124 00:24:21.637074 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 00:24:21 crc kubenswrapper[4676]: I0124 00:24:21.664911 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 00:24:23 crc kubenswrapper[4676]: I0124 00:24:23.045079 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84633021-2bad-4066-a01c-44fef6902524","Type":"ContainerStarted","Data":"6301d47d4294644c70fd618b97e56c019270427fbdab822047ac3bfe88e7e594"} Jan 24 00:24:23 crc kubenswrapper[4676]: I0124 00:24:23.045514 4676 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84633021-2bad-4066-a01c-44fef6902524","Type":"ContainerStarted","Data":"b50dc415c434720b0458f8bff74066a1f5105e74beeb732a3d51cdf130cbe5d3"} Jan 24 00:24:23 crc kubenswrapper[4676]: I0124 00:24:23.045637 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="84633021-2bad-4066-a01c-44fef6902524" containerName="nova-metadata-log" containerID="cri-o://b50dc415c434720b0458f8bff74066a1f5105e74beeb732a3d51cdf130cbe5d3" gracePeriod=30 Jan 24 00:24:23 crc kubenswrapper[4676]: I0124 00:24:23.046072 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="84633021-2bad-4066-a01c-44fef6902524" containerName="nova-metadata-metadata" containerID="cri-o://6301d47d4294644c70fd618b97e56c019270427fbdab822047ac3bfe88e7e594" gracePeriod=30 Jan 24 00:24:23 crc kubenswrapper[4676]: I0124 00:24:23.049751 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65324a0f-f62a-4807-85d2-e4b607b2a0b4","Type":"ContainerStarted","Data":"e4451937240be2f867c73971e845e8f6b7d4b6bee35057ae40b7f593e4b5d73c"} Jan 24 00:24:23 crc kubenswrapper[4676]: I0124 00:24:23.049906 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="65324a0f-f62a-4807-85d2-e4b607b2a0b4" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e4451937240be2f867c73971e845e8f6b7d4b6bee35057ae40b7f593e4b5d73c" gracePeriod=30 Jan 24 00:24:23 crc kubenswrapper[4676]: I0124 00:24:23.062758 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98b86cf1-9245-4a23-aac1-262c54e60716","Type":"ContainerStarted","Data":"6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997"} Jan 24 00:24:23 crc kubenswrapper[4676]: I0124 00:24:23.064465 4676 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d70bd7f-2410-490a-897f-c17149dbc01d","Type":"ContainerStarted","Data":"d5aab805f55f9b9bf75ccb1416d0d46c71a7f8b8d382ea34d87b6fc4caf8ea0a"} Jan 24 00:24:23 crc kubenswrapper[4676]: I0124 00:24:23.064562 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d70bd7f-2410-490a-897f-c17149dbc01d","Type":"ContainerStarted","Data":"9b214cec1541db1b7c8f4c7ce792f15f1a691f1915cb848b2cef3968d5d483b7"} Jan 24 00:24:23 crc kubenswrapper[4676]: I0124 00:24:23.084643 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.701210031 podStartE2EDuration="6.083112395s" podCreationTimestamp="2026-01-24 00:24:17 +0000 UTC" firstStartedPulling="2026-01-24 00:24:18.850366999 +0000 UTC m=+1242.880338000" lastFinishedPulling="2026-01-24 00:24:22.232269363 +0000 UTC m=+1246.262240364" observedRunningTime="2026-01-24 00:24:23.074702088 +0000 UTC m=+1247.104673089" watchObservedRunningTime="2026-01-24 00:24:23.083112395 +0000 UTC m=+1247.113083396" Jan 24 00:24:23 crc kubenswrapper[4676]: I0124 00:24:23.106180 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.608366912 podStartE2EDuration="6.106163687s" podCreationTimestamp="2026-01-24 00:24:17 +0000 UTC" firstStartedPulling="2026-01-24 00:24:18.736057686 +0000 UTC m=+1242.766028687" lastFinishedPulling="2026-01-24 00:24:22.233854461 +0000 UTC m=+1246.263825462" observedRunningTime="2026-01-24 00:24:23.095531704 +0000 UTC m=+1247.125502705" watchObservedRunningTime="2026-01-24 00:24:23.106163687 +0000 UTC m=+1247.136134688" Jan 24 00:24:23 crc kubenswrapper[4676]: I0124 00:24:23.124557 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.656416536 podStartE2EDuration="6.124538167s" podCreationTimestamp="2026-01-24 
00:24:17 +0000 UTC" firstStartedPulling="2026-01-24 00:24:18.773065694 +0000 UTC m=+1242.803036695" lastFinishedPulling="2026-01-24 00:24:22.241187325 +0000 UTC m=+1246.271158326" observedRunningTime="2026-01-24 00:24:23.111507 +0000 UTC m=+1247.141478001" watchObservedRunningTime="2026-01-24 00:24:23.124538167 +0000 UTC m=+1247.154509168" Jan 24 00:24:23 crc kubenswrapper[4676]: I0124 00:24:23.141971 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.632791246 podStartE2EDuration="6.141951858s" podCreationTimestamp="2026-01-24 00:24:17 +0000 UTC" firstStartedPulling="2026-01-24 00:24:18.723898455 +0000 UTC m=+1242.753869456" lastFinishedPulling="2026-01-24 00:24:22.233059067 +0000 UTC m=+1246.263030068" observedRunningTime="2026-01-24 00:24:23.13086436 +0000 UTC m=+1247.160835361" watchObservedRunningTime="2026-01-24 00:24:23.141951858 +0000 UTC m=+1247.171922859" Jan 24 00:24:24 crc kubenswrapper[4676]: I0124 00:24:24.081818 4676 generic.go:334] "Generic (PLEG): container finished" podID="84633021-2bad-4066-a01c-44fef6902524" containerID="b50dc415c434720b0458f8bff74066a1f5105e74beeb732a3d51cdf130cbe5d3" exitCode=143 Jan 24 00:24:24 crc kubenswrapper[4676]: I0124 00:24:24.081961 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84633021-2bad-4066-a01c-44fef6902524","Type":"ContainerDied","Data":"b50dc415c434720b0458f8bff74066a1f5105e74beeb732a3d51cdf130cbe5d3"} Jan 24 00:24:27 crc kubenswrapper[4676]: I0124 00:24:27.166573 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 24 00:24:27 crc kubenswrapper[4676]: I0124 00:24:27.620218 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 24 00:24:27 crc kubenswrapper[4676]: I0124 00:24:27.620267 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-scheduler-0" Jan 24 00:24:27 crc kubenswrapper[4676]: I0124 00:24:27.651353 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 24 00:24:27 crc kubenswrapper[4676]: I0124 00:24:27.767561 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 00:24:27 crc kubenswrapper[4676]: I0124 00:24:27.767608 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 00:24:27 crc kubenswrapper[4676]: I0124 00:24:27.884727 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:27 crc kubenswrapper[4676]: I0124 00:24:27.905014 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 00:24:27 crc kubenswrapper[4676]: I0124 00:24:27.905055 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 00:24:28 crc kubenswrapper[4676]: I0124 00:24:28.118360 4676 generic.go:334] "Generic (PLEG): container finished" podID="8ee94537-f601-4765-95cb-d56518fb7fc6" containerID="78da04e8326c020891ea05f676639495efe89b95f4c1821c5af31b8929259b96" exitCode=0 Jan 24 00:24:28 crc kubenswrapper[4676]: I0124 00:24:28.118454 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2cwbv" event={"ID":"8ee94537-f601-4765-95cb-d56518fb7fc6","Type":"ContainerDied","Data":"78da04e8326c020891ea05f676639495efe89b95f4c1821c5af31b8929259b96"} Jan 24 00:24:28 crc kubenswrapper[4676]: I0124 00:24:28.120526 4676 generic.go:334] "Generic (PLEG): container finished" podID="a1b05e80-51fb-476f-a2b4-bf5a290ea5ae" containerID="1e6d788c12343033e8113f7b2adf05ab1c0f4bf8cb67dce88248d068873baa87" exitCode=0 Jan 24 00:24:28 crc kubenswrapper[4676]: I0124 00:24:28.121415 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-fpw6c" event={"ID":"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae","Type":"ContainerDied","Data":"1e6d788c12343033e8113f7b2adf05ab1c0f4bf8cb67dce88248d068873baa87"} Jan 24 00:24:28 crc kubenswrapper[4676]: I0124 00:24:28.165666 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 24 00:24:28 crc kubenswrapper[4676]: I0124 00:24:28.356305 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-72qp2" Jan 24 00:24:28 crc kubenswrapper[4676]: I0124 00:24:28.410465 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-jbptt"] Jan 24 00:24:28 crc kubenswrapper[4676]: I0124 00:24:28.410960 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-jbptt" podUID="9b4acc7d-6a5f-4544-b73e-61b9a620c17d" containerName="dnsmasq-dns" containerID="cri-o://3c1cdcc2ffd783555c7e0583ecb6ac91d2d0c15034969f7f5f6f8dba79f95867" gracePeriod=10 Jan 24 00:24:28 crc kubenswrapper[4676]: I0124 00:24:28.884610 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d70bd7f-2410-490a-897f-c17149dbc01d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 00:24:28 crc kubenswrapper[4676]: I0124 00:24:28.884855 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d70bd7f-2410-490a-897f-c17149dbc01d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.129778 4676 generic.go:334] "Generic (PLEG): container finished" podID="9b4acc7d-6a5f-4544-b73e-61b9a620c17d" 
containerID="3c1cdcc2ffd783555c7e0583ecb6ac91d2d0c15034969f7f5f6f8dba79f95867" exitCode=0 Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.130297 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-jbptt" event={"ID":"9b4acc7d-6a5f-4544-b73e-61b9a620c17d","Type":"ContainerDied","Data":"3c1cdcc2ffd783555c7e0583ecb6ac91d2d0c15034969f7f5f6f8dba79f95867"} Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.130629 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-jbptt" event={"ID":"9b4acc7d-6a5f-4544-b73e-61b9a620c17d","Type":"ContainerDied","Data":"eec257beeff7ba3f585f1d2f0b787abd4407c78ad654a5cbdd01eb786faa02e2"} Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.130777 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eec257beeff7ba3f585f1d2f0b787abd4407c78ad654a5cbdd01eb786faa02e2" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.189771 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.273057 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-config\") pod \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.273148 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-dns-swift-storage-0\") pod \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.273302 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-ovsdbserver-sb\") pod \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.273342 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-dns-svc\") pod \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.273465 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxx64\" (UniqueName: \"kubernetes.io/projected/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-kube-api-access-xxx64\") pod \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.273486 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-ovsdbserver-nb\") pod \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\" (UID: \"9b4acc7d-6a5f-4544-b73e-61b9a620c17d\") " Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.308503 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-kube-api-access-xxx64" (OuterVolumeSpecName: "kube-api-access-xxx64") pod "9b4acc7d-6a5f-4544-b73e-61b9a620c17d" (UID: "9b4acc7d-6a5f-4544-b73e-61b9a620c17d"). InnerVolumeSpecName "kube-api-access-xxx64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.375647 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxx64\" (UniqueName: \"kubernetes.io/projected/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-kube-api-access-xxx64\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.393480 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9b4acc7d-6a5f-4544-b73e-61b9a620c17d" (UID: "9b4acc7d-6a5f-4544-b73e-61b9a620c17d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.394849 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b4acc7d-6a5f-4544-b73e-61b9a620c17d" (UID: "9b4acc7d-6a5f-4544-b73e-61b9a620c17d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.401217 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-config" (OuterVolumeSpecName: "config") pod "9b4acc7d-6a5f-4544-b73e-61b9a620c17d" (UID: "9b4acc7d-6a5f-4544-b73e-61b9a620c17d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.411795 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b4acc7d-6a5f-4544-b73e-61b9a620c17d" (UID: "9b4acc7d-6a5f-4544-b73e-61b9a620c17d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.412758 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9b4acc7d-6a5f-4544-b73e-61b9a620c17d" (UID: "9b4acc7d-6a5f-4544-b73e-61b9a620c17d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.481221 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.481243 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.481252 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.481261 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.481269 4676 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b4acc7d-6a5f-4544-b73e-61b9a620c17d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.731692 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.736866 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.786837 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-scripts\") pod \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.786887 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-combined-ca-bundle\") pod \"8ee94537-f601-4765-95cb-d56518fb7fc6\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.786967 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-config-data\") pod \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.787050 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2zrw\" (UniqueName: \"kubernetes.io/projected/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-kube-api-access-j2zrw\") pod \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.787107 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-scripts\") pod \"8ee94537-f601-4765-95cb-d56518fb7fc6\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.787124 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-combined-ca-bundle\") pod \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\" (UID: \"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae\") " Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.787167 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-config-data\") pod \"8ee94537-f601-4765-95cb-d56518fb7fc6\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.787242 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j96h\" (UniqueName: \"kubernetes.io/projected/8ee94537-f601-4765-95cb-d56518fb7fc6-kube-api-access-2j96h\") pod \"8ee94537-f601-4765-95cb-d56518fb7fc6\" (UID: \"8ee94537-f601-4765-95cb-d56518fb7fc6\") " Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.798273 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ee94537-f601-4765-95cb-d56518fb7fc6-kube-api-access-2j96h" (OuterVolumeSpecName: "kube-api-access-2j96h") pod "8ee94537-f601-4765-95cb-d56518fb7fc6" (UID: "8ee94537-f601-4765-95cb-d56518fb7fc6"). InnerVolumeSpecName "kube-api-access-2j96h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.809674 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-scripts" (OuterVolumeSpecName: "scripts") pod "8ee94537-f601-4765-95cb-d56518fb7fc6" (UID: "8ee94537-f601-4765-95cb-d56518fb7fc6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.814159 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-kube-api-access-j2zrw" (OuterVolumeSpecName: "kube-api-access-j2zrw") pod "a1b05e80-51fb-476f-a2b4-bf5a290ea5ae" (UID: "a1b05e80-51fb-476f-a2b4-bf5a290ea5ae"). InnerVolumeSpecName "kube-api-access-j2zrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.821713 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-scripts" (OuterVolumeSpecName: "scripts") pod "a1b05e80-51fb-476f-a2b4-bf5a290ea5ae" (UID: "a1b05e80-51fb-476f-a2b4-bf5a290ea5ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.833456 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1b05e80-51fb-476f-a2b4-bf5a290ea5ae" (UID: "a1b05e80-51fb-476f-a2b4-bf5a290ea5ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.836506 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-config-data" (OuterVolumeSpecName: "config-data") pod "a1b05e80-51fb-476f-a2b4-bf5a290ea5ae" (UID: "a1b05e80-51fb-476f-a2b4-bf5a290ea5ae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.839603 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ee94537-f601-4765-95cb-d56518fb7fc6" (UID: "8ee94537-f601-4765-95cb-d56518fb7fc6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.854276 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-config-data" (OuterVolumeSpecName: "config-data") pod "8ee94537-f601-4765-95cb-d56518fb7fc6" (UID: "8ee94537-f601-4765-95cb-d56518fb7fc6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.889319 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2zrw\" (UniqueName: \"kubernetes.io/projected/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-kube-api-access-j2zrw\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.889349 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.889358 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.889367 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 
00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.889389 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j96h\" (UniqueName: \"kubernetes.io/projected/8ee94537-f601-4765-95cb-d56518fb7fc6-kube-api-access-2j96h\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.889398 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.889407 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee94537-f601-4765-95cb-d56518fb7fc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:29 crc kubenswrapper[4676]: I0124 00:24:29.889416 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.139423 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2cwbv" event={"ID":"8ee94537-f601-4765-95cb-d56518fb7fc6","Type":"ContainerDied","Data":"94c2e3389aa98f25b2feddec05c3db32269606b8ba9977ffa3565e20fb1c9ec4"} Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.139475 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94c2e3389aa98f25b2feddec05c3db32269606b8ba9977ffa3565e20fb1c9ec4" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.139490 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2cwbv" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.141343 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-jbptt" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.141352 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fpw6c" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.141402 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fpw6c" event={"ID":"a1b05e80-51fb-476f-a2b4-bf5a290ea5ae","Type":"ContainerDied","Data":"536f2c9f1d41a95dfb3c6cf31c7557c7581ee9ab704135d4570d8f54edee7545"} Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.141442 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="536f2c9f1d41a95dfb3c6cf31c7557c7581ee9ab704135d4570d8f54edee7545" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.181104 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-jbptt"] Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.181261 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-bf988b4bd-ls7hp" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.181342 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.199682 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-jbptt"] Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.266334 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b4acc7d-6a5f-4544-b73e-61b9a620c17d" path="/var/lib/kubelet/pods/9b4acc7d-6a5f-4544-b73e-61b9a620c17d/volumes" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 
00:24:30.293803 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 24 00:24:30 crc kubenswrapper[4676]: E0124 00:24:30.294247 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4acc7d-6a5f-4544-b73e-61b9a620c17d" containerName="dnsmasq-dns" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.294262 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4acc7d-6a5f-4544-b73e-61b9a620c17d" containerName="dnsmasq-dns" Jan 24 00:24:30 crc kubenswrapper[4676]: E0124 00:24:30.294275 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ee94537-f601-4765-95cb-d56518fb7fc6" containerName="nova-manage" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.294284 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ee94537-f601-4765-95cb-d56518fb7fc6" containerName="nova-manage" Jan 24 00:24:30 crc kubenswrapper[4676]: E0124 00:24:30.294303 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1b05e80-51fb-476f-a2b4-bf5a290ea5ae" containerName="nova-cell1-conductor-db-sync" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.294309 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1b05e80-51fb-476f-a2b4-bf5a290ea5ae" containerName="nova-cell1-conductor-db-sync" Jan 24 00:24:30 crc kubenswrapper[4676]: E0124 00:24:30.294335 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4acc7d-6a5f-4544-b73e-61b9a620c17d" containerName="init" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.294341 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4acc7d-6a5f-4544-b73e-61b9a620c17d" containerName="init" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.294515 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b4acc7d-6a5f-4544-b73e-61b9a620c17d" containerName="dnsmasq-dns" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.294528 4676 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="a1b05e80-51fb-476f-a2b4-bf5a290ea5ae" containerName="nova-cell1-conductor-db-sync" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.294543 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ee94537-f601-4765-95cb-d56518fb7fc6" containerName="nova-manage" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.295143 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.297674 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.303066 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.399416 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4420d197-3908-406b-9661-67d64ccd7768-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4420d197-3908-406b-9661-67d64ccd7768\") " pod="openstack/nova-cell1-conductor-0" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.399498 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4420d197-3908-406b-9661-67d64ccd7768-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4420d197-3908-406b-9661-67d64ccd7768\") " pod="openstack/nova-cell1-conductor-0" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.399535 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgxh6\" (UniqueName: \"kubernetes.io/projected/4420d197-3908-406b-9661-67d64ccd7768-kube-api-access-cgxh6\") pod \"nova-cell1-conductor-0\" (UID: \"4420d197-3908-406b-9661-67d64ccd7768\") " 
pod="openstack/nova-cell1-conductor-0" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.424942 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.425154 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0d70bd7f-2410-490a-897f-c17149dbc01d" containerName="nova-api-log" containerID="cri-o://9b214cec1541db1b7c8f4c7ce792f15f1a691f1915cb848b2cef3968d5d483b7" gracePeriod=30 Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.425460 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0d70bd7f-2410-490a-897f-c17149dbc01d" containerName="nova-api-api" containerID="cri-o://d5aab805f55f9b9bf75ccb1416d0d46c71a7f8b8d382ea34d87b6fc4caf8ea0a" gracePeriod=30 Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.447979 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.448172 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="98b86cf1-9245-4a23-aac1-262c54e60716" containerName="nova-scheduler-scheduler" containerID="cri-o://6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997" gracePeriod=30 Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.500763 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgxh6\" (UniqueName: \"kubernetes.io/projected/4420d197-3908-406b-9661-67d64ccd7768-kube-api-access-cgxh6\") pod \"nova-cell1-conductor-0\" (UID: \"4420d197-3908-406b-9661-67d64ccd7768\") " pod="openstack/nova-cell1-conductor-0" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.501203 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4420d197-3908-406b-9661-67d64ccd7768-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4420d197-3908-406b-9661-67d64ccd7768\") " pod="openstack/nova-cell1-conductor-0" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.501269 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4420d197-3908-406b-9661-67d64ccd7768-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4420d197-3908-406b-9661-67d64ccd7768\") " pod="openstack/nova-cell1-conductor-0" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.505344 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4420d197-3908-406b-9661-67d64ccd7768-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4420d197-3908-406b-9661-67d64ccd7768\") " pod="openstack/nova-cell1-conductor-0" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.506326 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4420d197-3908-406b-9661-67d64ccd7768-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4420d197-3908-406b-9661-67d64ccd7768\") " pod="openstack/nova-cell1-conductor-0" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.527291 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgxh6\" (UniqueName: \"kubernetes.io/projected/4420d197-3908-406b-9661-67d64ccd7768-kube-api-access-cgxh6\") pod \"nova-cell1-conductor-0\" (UID: \"4420d197-3908-406b-9661-67d64ccd7768\") " pod="openstack/nova-cell1-conductor-0" Jan 24 00:24:30 crc kubenswrapper[4676]: I0124 00:24:30.691680 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 24 00:24:31 crc kubenswrapper[4676]: I0124 00:24:31.150431 4676 generic.go:334] "Generic (PLEG): container finished" podID="0d70bd7f-2410-490a-897f-c17149dbc01d" containerID="9b214cec1541db1b7c8f4c7ce792f15f1a691f1915cb848b2cef3968d5d483b7" exitCode=143 Jan 24 00:24:31 crc kubenswrapper[4676]: I0124 00:24:31.150507 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d70bd7f-2410-490a-897f-c17149dbc01d","Type":"ContainerDied","Data":"9b214cec1541db1b7c8f4c7ce792f15f1a691f1915cb848b2cef3968d5d483b7"} Jan 24 00:24:31 crc kubenswrapper[4676]: I0124 00:24:31.233137 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 24 00:24:32 crc kubenswrapper[4676]: I0124 00:24:32.158604 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4420d197-3908-406b-9661-67d64ccd7768","Type":"ContainerStarted","Data":"7adaaa21bfe2214008bad41a454b7f4029447556182d11dec2c3d953ee0b1a9d"} Jan 24 00:24:32 crc kubenswrapper[4676]: I0124 00:24:32.158854 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4420d197-3908-406b-9661-67d64ccd7768","Type":"ContainerStarted","Data":"622536c9ee0ca1723858277205b67fe8775384dce8b6bb05d90cf3dd2d1ed2d6"} Jan 24 00:24:32 crc kubenswrapper[4676]: I0124 00:24:32.159039 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 24 00:24:32 crc kubenswrapper[4676]: E0124 00:24:32.632463 4676 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 24 00:24:32 crc kubenswrapper[4676]: 
E0124 00:24:32.638222 4676 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 24 00:24:32 crc kubenswrapper[4676]: E0124 00:24:32.650338 4676 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 24 00:24:32 crc kubenswrapper[4676]: E0124 00:24:32.650463 4676 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="98b86cf1-9245-4a23-aac1-262c54e60716" containerName="nova-scheduler-scheduler" Jan 24 00:24:33 crc kubenswrapper[4676]: I0124 00:24:33.067612 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.067593391 podStartE2EDuration="3.067593391s" podCreationTimestamp="2026-01-24 00:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:24:32.171664195 +0000 UTC m=+1256.201635196" watchObservedRunningTime="2026-01-24 00:24:33.067593391 +0000 UTC m=+1257.097564382" Jan 24 00:24:33 crc kubenswrapper[4676]: I0124 00:24:33.074231 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 00:24:33 crc kubenswrapper[4676]: I0124 00:24:33.074439 4676 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/kube-state-metrics-0" podUID="c626d726-d406-41c7-8e66-07cc148d3aa7" containerName="kube-state-metrics" containerID="cri-o://0c6072b7b64b06470b62859afd989687c08854e4403eafaa80685de7b9b6ba62" gracePeriod=30 Jan 24 00:24:33 crc kubenswrapper[4676]: I0124 00:24:33.595149 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 00:24:33 crc kubenswrapper[4676]: I0124 00:24:33.656657 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqg2d\" (UniqueName: \"kubernetes.io/projected/c626d726-d406-41c7-8e66-07cc148d3aa7-kube-api-access-vqg2d\") pod \"c626d726-d406-41c7-8e66-07cc148d3aa7\" (UID: \"c626d726-d406-41c7-8e66-07cc148d3aa7\") " Jan 24 00:24:33 crc kubenswrapper[4676]: I0124 00:24:33.702005 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c626d726-d406-41c7-8e66-07cc148d3aa7-kube-api-access-vqg2d" (OuterVolumeSpecName: "kube-api-access-vqg2d") pod "c626d726-d406-41c7-8e66-07cc148d3aa7" (UID: "c626d726-d406-41c7-8e66-07cc148d3aa7"). InnerVolumeSpecName "kube-api-access-vqg2d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:24:33 crc kubenswrapper[4676]: I0124 00:24:33.775695 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqg2d\" (UniqueName: \"kubernetes.io/projected/c626d726-d406-41c7-8e66-07cc148d3aa7-kube-api-access-vqg2d\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.175410 4676 generic.go:334] "Generic (PLEG): container finished" podID="c626d726-d406-41c7-8e66-07cc148d3aa7" containerID="0c6072b7b64b06470b62859afd989687c08854e4403eafaa80685de7b9b6ba62" exitCode=2 Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.175696 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c626d726-d406-41c7-8e66-07cc148d3aa7","Type":"ContainerDied","Data":"0c6072b7b64b06470b62859afd989687c08854e4403eafaa80685de7b9b6ba62"} Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.175721 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c626d726-d406-41c7-8e66-07cc148d3aa7","Type":"ContainerDied","Data":"a0d93a838adacca6f5b952cbfd2b7bfc9bf0b2ee8a2c8c9cf055df1b2e54dbca"} Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.175736 4676 scope.go:117] "RemoveContainer" containerID="0c6072b7b64b06470b62859afd989687c08854e4403eafaa80685de7b9b6ba62" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.175863 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.202480 4676 scope.go:117] "RemoveContainer" containerID="0c6072b7b64b06470b62859afd989687c08854e4403eafaa80685de7b9b6ba62" Jan 24 00:24:34 crc kubenswrapper[4676]: E0124 00:24:34.203336 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c6072b7b64b06470b62859afd989687c08854e4403eafaa80685de7b9b6ba62\": container with ID starting with 0c6072b7b64b06470b62859afd989687c08854e4403eafaa80685de7b9b6ba62 not found: ID does not exist" containerID="0c6072b7b64b06470b62859afd989687c08854e4403eafaa80685de7b9b6ba62" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.203392 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c6072b7b64b06470b62859afd989687c08854e4403eafaa80685de7b9b6ba62"} err="failed to get container status \"0c6072b7b64b06470b62859afd989687c08854e4403eafaa80685de7b9b6ba62\": rpc error: code = NotFound desc = could not find container \"0c6072b7b64b06470b62859afd989687c08854e4403eafaa80685de7b9b6ba62\": container with ID starting with 0c6072b7b64b06470b62859afd989687c08854e4403eafaa80685de7b9b6ba62 not found: ID does not exist" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.207836 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.216720 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.229416 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 00:24:34 crc kubenswrapper[4676]: E0124 00:24:34.229994 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c626d726-d406-41c7-8e66-07cc148d3aa7" containerName="kube-state-metrics" Jan 24 00:24:34 crc 
kubenswrapper[4676]: I0124 00:24:34.230061 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="c626d726-d406-41c7-8e66-07cc148d3aa7" containerName="kube-state-metrics" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.230300 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="c626d726-d406-41c7-8e66-07cc148d3aa7" containerName="kube-state-metrics" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.231323 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.234081 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.234405 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.239883 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.274959 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c626d726-d406-41c7-8e66-07cc148d3aa7" path="/var/lib/kubelet/pods/c626d726-d406-41c7-8e66-07cc148d3aa7/volumes" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.285455 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/25ee749c-0b84-4abd-9fe0-a6f23c0c912d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"25ee749c-0b84-4abd-9fe0-a6f23c0c912d\") " pod="openstack/kube-state-metrics-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.285497 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/25ee749c-0b84-4abd-9fe0-a6f23c0c912d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"25ee749c-0b84-4abd-9fe0-a6f23c0c912d\") " pod="openstack/kube-state-metrics-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.285516 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/25ee749c-0b84-4abd-9fe0-a6f23c0c912d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"25ee749c-0b84-4abd-9fe0-a6f23c0c912d\") " pod="openstack/kube-state-metrics-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.285570 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjsqf\" (UniqueName: \"kubernetes.io/projected/25ee749c-0b84-4abd-9fe0-a6f23c0c912d-kube-api-access-pjsqf\") pod \"kube-state-metrics-0\" (UID: \"25ee749c-0b84-4abd-9fe0-a6f23c0c912d\") " pod="openstack/kube-state-metrics-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.387993 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/25ee749c-0b84-4abd-9fe0-a6f23c0c912d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"25ee749c-0b84-4abd-9fe0-a6f23c0c912d\") " pod="openstack/kube-state-metrics-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.388220 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ee749c-0b84-4abd-9fe0-a6f23c0c912d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"25ee749c-0b84-4abd-9fe0-a6f23c0c912d\") " pod="openstack/kube-state-metrics-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.388303 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/25ee749c-0b84-4abd-9fe0-a6f23c0c912d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"25ee749c-0b84-4abd-9fe0-a6f23c0c912d\") " pod="openstack/kube-state-metrics-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.388404 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjsqf\" (UniqueName: \"kubernetes.io/projected/25ee749c-0b84-4abd-9fe0-a6f23c0c912d-kube-api-access-pjsqf\") pod \"kube-state-metrics-0\" (UID: \"25ee749c-0b84-4abd-9fe0-a6f23c0c912d\") " pod="openstack/kube-state-metrics-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.393440 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/25ee749c-0b84-4abd-9fe0-a6f23c0c912d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"25ee749c-0b84-4abd-9fe0-a6f23c0c912d\") " pod="openstack/kube-state-metrics-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.393810 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ee749c-0b84-4abd-9fe0-a6f23c0c912d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"25ee749c-0b84-4abd-9fe0-a6f23c0c912d\") " pod="openstack/kube-state-metrics-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.406328 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/25ee749c-0b84-4abd-9fe0-a6f23c0c912d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"25ee749c-0b84-4abd-9fe0-a6f23c0c912d\") " pod="openstack/kube-state-metrics-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.410176 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjsqf\" (UniqueName: 
\"kubernetes.io/projected/25ee749c-0b84-4abd-9fe0-a6f23c0c912d-kube-api-access-pjsqf\") pod \"kube-state-metrics-0\" (UID: \"25ee749c-0b84-4abd-9fe0-a6f23c0c912d\") " pod="openstack/kube-state-metrics-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.601499 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.734972 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.795631 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzqhw\" (UniqueName: \"kubernetes.io/projected/98b86cf1-9245-4a23-aac1-262c54e60716-kube-api-access-pzqhw\") pod \"98b86cf1-9245-4a23-aac1-262c54e60716\" (UID: \"98b86cf1-9245-4a23-aac1-262c54e60716\") " Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.795955 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b86cf1-9245-4a23-aac1-262c54e60716-config-data\") pod \"98b86cf1-9245-4a23-aac1-262c54e60716\" (UID: \"98b86cf1-9245-4a23-aac1-262c54e60716\") " Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.796147 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b86cf1-9245-4a23-aac1-262c54e60716-combined-ca-bundle\") pod \"98b86cf1-9245-4a23-aac1-262c54e60716\" (UID: \"98b86cf1-9245-4a23-aac1-262c54e60716\") " Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.814904 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98b86cf1-9245-4a23-aac1-262c54e60716-kube-api-access-pzqhw" (OuterVolumeSpecName: "kube-api-access-pzqhw") pod "98b86cf1-9245-4a23-aac1-262c54e60716" (UID: "98b86cf1-9245-4a23-aac1-262c54e60716"). 
InnerVolumeSpecName "kube-api-access-pzqhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.875672 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b86cf1-9245-4a23-aac1-262c54e60716-config-data" (OuterVolumeSpecName: "config-data") pod "98b86cf1-9245-4a23-aac1-262c54e60716" (UID: "98b86cf1-9245-4a23-aac1-262c54e60716"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.877551 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b86cf1-9245-4a23-aac1-262c54e60716-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98b86cf1-9245-4a23-aac1-262c54e60716" (UID: "98b86cf1-9245-4a23-aac1-262c54e60716"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.899794 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzqhw\" (UniqueName: \"kubernetes.io/projected/98b86cf1-9245-4a23-aac1-262c54e60716-kube-api-access-pzqhw\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.899823 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b86cf1-9245-4a23-aac1-262c54e60716-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:34 crc kubenswrapper[4676]: I0124 00:24:34.899833 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b86cf1-9245-4a23-aac1-262c54e60716-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.183336 4676 generic.go:334] "Generic (PLEG): container finished" podID="98b86cf1-9245-4a23-aac1-262c54e60716" 
containerID="6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997" exitCode=0 Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.183558 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98b86cf1-9245-4a23-aac1-262c54e60716","Type":"ContainerDied","Data":"6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997"} Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.183643 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98b86cf1-9245-4a23-aac1-262c54e60716","Type":"ContainerDied","Data":"842457e0d43257943e9267d87b0c8b3c36220ce75035a13b025aa38f1c36e69d"} Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.183650 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.183693 4676 scope.go:117] "RemoveContainer" containerID="6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.202805 4676 scope.go:117] "RemoveContainer" containerID="6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997" Jan 24 00:24:35 crc kubenswrapper[4676]: E0124 00:24:35.203598 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997\": container with ID starting with 6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997 not found: ID does not exist" containerID="6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.203622 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997"} err="failed to get container status 
\"6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997\": rpc error: code = NotFound desc = could not find container \"6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997\": container with ID starting with 6fd839294bba279baff5cde259e88fab5755b95b8b906e8844a646f2f9e0e997 not found: ID does not exist" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.217152 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.224158 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.242168 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 00:24:35 crc kubenswrapper[4676]: E0124 00:24:35.242526 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b86cf1-9245-4a23-aac1-262c54e60716" containerName="nova-scheduler-scheduler" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.242542 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b86cf1-9245-4a23-aac1-262c54e60716" containerName="nova-scheduler-scheduler" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.242718 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b86cf1-9245-4a23-aac1-262c54e60716" containerName="nova-scheduler-scheduler" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.243256 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.250055 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.267823 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.316405 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32d678d8-bc5d-4993-afc3-c6700d53b00e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"32d678d8-bc5d-4993-afc3-c6700d53b00e\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.316727 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vr9g\" (UniqueName: \"kubernetes.io/projected/32d678d8-bc5d-4993-afc3-c6700d53b00e-kube-api-access-5vr9g\") pod \"nova-scheduler-0\" (UID: \"32d678d8-bc5d-4993-afc3-c6700d53b00e\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.316880 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32d678d8-bc5d-4993-afc3-c6700d53b00e-config-data\") pod \"nova-scheduler-0\" (UID: \"32d678d8-bc5d-4993-afc3-c6700d53b00e\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.324692 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.421513 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32d678d8-bc5d-4993-afc3-c6700d53b00e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"32d678d8-bc5d-4993-afc3-c6700d53b00e\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.421935 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vr9g\" (UniqueName: \"kubernetes.io/projected/32d678d8-bc5d-4993-afc3-c6700d53b00e-kube-api-access-5vr9g\") pod \"nova-scheduler-0\" (UID: \"32d678d8-bc5d-4993-afc3-c6700d53b00e\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.421968 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32d678d8-bc5d-4993-afc3-c6700d53b00e-config-data\") pod \"nova-scheduler-0\" (UID: \"32d678d8-bc5d-4993-afc3-c6700d53b00e\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.429999 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32d678d8-bc5d-4993-afc3-c6700d53b00e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"32d678d8-bc5d-4993-afc3-c6700d53b00e\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.430270 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32d678d8-bc5d-4993-afc3-c6700d53b00e-config-data\") pod \"nova-scheduler-0\" (UID: \"32d678d8-bc5d-4993-afc3-c6700d53b00e\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.445011 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vr9g\" (UniqueName: \"kubernetes.io/projected/32d678d8-bc5d-4993-afc3-c6700d53b00e-kube-api-access-5vr9g\") pod \"nova-scheduler-0\" (UID: \"32d678d8-bc5d-4993-afc3-c6700d53b00e\") " pod="openstack/nova-scheduler-0" Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.547701 4676 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ceilometer-0"] Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.547948 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30206279-8386-4575-9457-e76760505e8d" containerName="ceilometer-central-agent" containerID="cri-o://acf37fd15df58f0c480e8ca3019535bc0893909734220600f3f004d89aa3edc9" gracePeriod=30 Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.548011 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30206279-8386-4575-9457-e76760505e8d" containerName="proxy-httpd" containerID="cri-o://cf2f47547ef5fe4c3cd5d6009235ea3a947829e0913a6e11bb70499604aaa440" gracePeriod=30 Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.548031 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30206279-8386-4575-9457-e76760505e8d" containerName="sg-core" containerID="cri-o://993ed8d33422939d102af22915be4b96f85bac159232e43af92e15ee1383a46a" gracePeriod=30 Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.548104 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30206279-8386-4575-9457-e76760505e8d" containerName="ceilometer-notification-agent" containerID="cri-o://74fc38a94fa794a834f28752c4c0e574b0f6bd303f04b96af63769a91e8933fe" gracePeriod=30 Jan 24 00:24:35 crc kubenswrapper[4676]: I0124 00:24:35.646517 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.169256 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.211191 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.213777 4676 generic.go:334] "Generic (PLEG): container finished" podID="30206279-8386-4575-9457-e76760505e8d" containerID="cf2f47547ef5fe4c3cd5d6009235ea3a947829e0913a6e11bb70499604aaa440" exitCode=0 Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.213827 4676 generic.go:334] "Generic (PLEG): container finished" podID="30206279-8386-4575-9457-e76760505e8d" containerID="993ed8d33422939d102af22915be4b96f85bac159232e43af92e15ee1383a46a" exitCode=2 Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.213837 4676 generic.go:334] "Generic (PLEG): container finished" podID="30206279-8386-4575-9457-e76760505e8d" containerID="acf37fd15df58f0c480e8ca3019535bc0893909734220600f3f004d89aa3edc9" exitCode=0 Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.213878 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30206279-8386-4575-9457-e76760505e8d","Type":"ContainerDied","Data":"cf2f47547ef5fe4c3cd5d6009235ea3a947829e0913a6e11bb70499604aaa440"} Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.213930 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30206279-8386-4575-9457-e76760505e8d","Type":"ContainerDied","Data":"993ed8d33422939d102af22915be4b96f85bac159232e43af92e15ee1383a46a"} Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.213939 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30206279-8386-4575-9457-e76760505e8d","Type":"ContainerDied","Data":"acf37fd15df58f0c480e8ca3019535bc0893909734220600f3f004d89aa3edc9"} Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.224533 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"32d678d8-bc5d-4993-afc3-c6700d53b00e","Type":"ContainerStarted","Data":"d01bc4c049fe259a880c270ae5608bf35ba2c89a0a3367424a3555670349c800"} Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.249707 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"25ee749c-0b84-4abd-9fe0-a6f23c0c912d","Type":"ContainerStarted","Data":"46aba89aa56be268b7b5a98dfb66b1b75ba3d5f9d305baac12a530ef54c5a879"} Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.249752 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"25ee749c-0b84-4abd-9fe0-a6f23c0c912d","Type":"ContainerStarted","Data":"4957534fb4d33c9d7ee550c51f7102ddf197f85e71897067791549e65d161c2a"} Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.251113 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.296719 4676 generic.go:334] "Generic (PLEG): container finished" podID="0d70bd7f-2410-490a-897f-c17149dbc01d" containerID="d5aab805f55f9b9bf75ccb1416d0d46c71a7f8b8d382ea34d87b6fc4caf8ea0a" exitCode=0 Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.296960 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.340397 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98b86cf1-9245-4a23-aac1-262c54e60716" path="/var/lib/kubelet/pods/98b86cf1-9245-4a23-aac1-262c54e60716/volumes" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.343502 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d70bd7f-2410-490a-897f-c17149dbc01d-combined-ca-bundle\") pod \"0d70bd7f-2410-490a-897f-c17149dbc01d\" (UID: \"0d70bd7f-2410-490a-897f-c17149dbc01d\") " Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.343579 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d70bd7f-2410-490a-897f-c17149dbc01d-logs\") pod \"0d70bd7f-2410-490a-897f-c17149dbc01d\" (UID: \"0d70bd7f-2410-490a-897f-c17149dbc01d\") " Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.343727 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d70bd7f-2410-490a-897f-c17149dbc01d-config-data\") pod \"0d70bd7f-2410-490a-897f-c17149dbc01d\" (UID: \"0d70bd7f-2410-490a-897f-c17149dbc01d\") " Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.343967 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsdvz\" (UniqueName: \"kubernetes.io/projected/0d70bd7f-2410-490a-897f-c17149dbc01d-kube-api-access-fsdvz\") pod \"0d70bd7f-2410-490a-897f-c17149dbc01d\" (UID: \"0d70bd7f-2410-490a-897f-c17149dbc01d\") " Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.344392 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d70bd7f-2410-490a-897f-c17149dbc01d-logs" (OuterVolumeSpecName: "logs") pod "0d70bd7f-2410-490a-897f-c17149dbc01d" (UID: 
"0d70bd7f-2410-490a-897f-c17149dbc01d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.344945 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d70bd7f-2410-490a-897f-c17149dbc01d-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.377364 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d70bd7f-2410-490a-897f-c17149dbc01d-kube-api-access-fsdvz" (OuterVolumeSpecName: "kube-api-access-fsdvz") pod "0d70bd7f-2410-490a-897f-c17149dbc01d" (UID: "0d70bd7f-2410-490a-897f-c17149dbc01d"). InnerVolumeSpecName "kube-api-access-fsdvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.385755 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d70bd7f-2410-490a-897f-c17149dbc01d-config-data" (OuterVolumeSpecName: "config-data") pod "0d70bd7f-2410-490a-897f-c17149dbc01d" (UID: "0d70bd7f-2410-490a-897f-c17149dbc01d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.397012 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.982400977 podStartE2EDuration="2.396991283s" podCreationTimestamp="2026-01-24 00:24:34 +0000 UTC" firstStartedPulling="2026-01-24 00:24:35.317434301 +0000 UTC m=+1259.347405302" lastFinishedPulling="2026-01-24 00:24:35.732024607 +0000 UTC m=+1259.761995608" observedRunningTime="2026-01-24 00:24:36.338829921 +0000 UTC m=+1260.368800922" watchObservedRunningTime="2026-01-24 00:24:36.396991283 +0000 UTC m=+1260.426962274" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.407963 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d70bd7f-2410-490a-897f-c17149dbc01d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d70bd7f-2410-490a-897f-c17149dbc01d" (UID: "0d70bd7f-2410-490a-897f-c17149dbc01d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.446932 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsdvz\" (UniqueName: \"kubernetes.io/projected/0d70bd7f-2410-490a-897f-c17149dbc01d-kube-api-access-fsdvz\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.446969 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d70bd7f-2410-490a-897f-c17149dbc01d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.446979 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d70bd7f-2410-490a-897f-c17149dbc01d-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.474508 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d70bd7f-2410-490a-897f-c17149dbc01d","Type":"ContainerDied","Data":"d5aab805f55f9b9bf75ccb1416d0d46c71a7f8b8d382ea34d87b6fc4caf8ea0a"} Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.474667 4676 scope.go:117] "RemoveContainer" containerID="d5aab805f55f9b9bf75ccb1416d0d46c71a7f8b8d382ea34d87b6fc4caf8ea0a" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.499713 4676 scope.go:117] "RemoveContainer" containerID="9b214cec1541db1b7c8f4c7ce792f15f1a691f1915cb848b2cef3968d5d483b7" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.632622 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.640396 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.662775 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 24 00:24:36 crc kubenswrapper[4676]: 
E0124 00:24:36.663157 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d70bd7f-2410-490a-897f-c17149dbc01d" containerName="nova-api-api" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.663173 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d70bd7f-2410-490a-897f-c17149dbc01d" containerName="nova-api-api" Jan 24 00:24:36 crc kubenswrapper[4676]: E0124 00:24:36.663189 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d70bd7f-2410-490a-897f-c17149dbc01d" containerName="nova-api-log" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.663195 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d70bd7f-2410-490a-897f-c17149dbc01d" containerName="nova-api-log" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.663396 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d70bd7f-2410-490a-897f-c17149dbc01d" containerName="nova-api-log" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.663412 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d70bd7f-2410-490a-897f-c17149dbc01d" containerName="nova-api-api" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.664285 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.666990 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.681641 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.752667 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8b8762-d78d-44ad-af70-5d53aced9c93-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " pod="openstack/nova-api-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.752754 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a8b8762-d78d-44ad-af70-5d53aced9c93-logs\") pod \"nova-api-0\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " pod="openstack/nova-api-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.753001 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbwbq\" (UniqueName: \"kubernetes.io/projected/3a8b8762-d78d-44ad-af70-5d53aced9c93-kube-api-access-zbwbq\") pod \"nova-api-0\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " pod="openstack/nova-api-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.753119 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8b8762-d78d-44ad-af70-5d53aced9c93-config-data\") pod \"nova-api-0\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " pod="openstack/nova-api-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.854184 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8b8762-d78d-44ad-af70-5d53aced9c93-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " pod="openstack/nova-api-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.854272 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a8b8762-d78d-44ad-af70-5d53aced9c93-logs\") pod \"nova-api-0\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " pod="openstack/nova-api-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.854344 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbwbq\" (UniqueName: \"kubernetes.io/projected/3a8b8762-d78d-44ad-af70-5d53aced9c93-kube-api-access-zbwbq\") pod \"nova-api-0\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " pod="openstack/nova-api-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.854398 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8b8762-d78d-44ad-af70-5d53aced9c93-config-data\") pod \"nova-api-0\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " pod="openstack/nova-api-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.854810 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a8b8762-d78d-44ad-af70-5d53aced9c93-logs\") pod \"nova-api-0\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " pod="openstack/nova-api-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.857982 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8b8762-d78d-44ad-af70-5d53aced9c93-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " pod="openstack/nova-api-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.862928 
4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8b8762-d78d-44ad-af70-5d53aced9c93-config-data\") pod \"nova-api-0\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " pod="openstack/nova-api-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.878934 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbwbq\" (UniqueName: \"kubernetes.io/projected/3a8b8762-d78d-44ad-af70-5d53aced9c93-kube-api-access-zbwbq\") pod \"nova-api-0\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " pod="openstack/nova-api-0" Jan 24 00:24:36 crc kubenswrapper[4676]: I0124 00:24:36.982874 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.084734 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.166776 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-horizon-tls-certs\") pod \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.167258 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-scripts\") pod \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.167846 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-horizon-secret-key\") pod \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\" (UID: 
\"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.168584 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-logs\") pod \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.168636 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-config-data\") pod \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.168660 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-combined-ca-bundle\") pod \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.168728 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwffk\" (UniqueName: \"kubernetes.io/projected/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-kube-api-access-cwffk\") pod \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\" (UID: \"9d2451a0-4896-46e4-9b9e-e309ccdf02f2\") " Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.169894 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-logs" (OuterVolumeSpecName: "logs") pod "9d2451a0-4896-46e4-9b9e-e309ccdf02f2" (UID: "9d2451a0-4896-46e4-9b9e-e309ccdf02f2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.174414 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-kube-api-access-cwffk" (OuterVolumeSpecName: "kube-api-access-cwffk") pod "9d2451a0-4896-46e4-9b9e-e309ccdf02f2" (UID: "9d2451a0-4896-46e4-9b9e-e309ccdf02f2"). InnerVolumeSpecName "kube-api-access-cwffk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.177554 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9d2451a0-4896-46e4-9b9e-e309ccdf02f2" (UID: "9d2451a0-4896-46e4-9b9e-e309ccdf02f2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.206132 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-config-data" (OuterVolumeSpecName: "config-data") pod "9d2451a0-4896-46e4-9b9e-e309ccdf02f2" (UID: "9d2451a0-4896-46e4-9b9e-e309ccdf02f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.208742 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-scripts" (OuterVolumeSpecName: "scripts") pod "9d2451a0-4896-46e4-9b9e-e309ccdf02f2" (UID: "9d2451a0-4896-46e4-9b9e-e309ccdf02f2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.220027 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d2451a0-4896-46e4-9b9e-e309ccdf02f2" (UID: "9d2451a0-4896-46e4-9b9e-e309ccdf02f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.270513 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "9d2451a0-4896-46e4-9b9e-e309ccdf02f2" (UID: "9d2451a0-4896-46e4-9b9e-e309ccdf02f2"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.271610 4676 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.271637 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.271646 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.271655 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.271663 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwffk\" (UniqueName: \"kubernetes.io/projected/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-kube-api-access-cwffk\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.271675 4676 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.271684 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2451a0-4896-46e4-9b9e-e309ccdf02f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.308099 4676 generic.go:334] "Generic (PLEG): container finished" podID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerID="ca1454b684ac38a1446a4ef033530125143bc9a4b12de7af96f54377b563b8fb" exitCode=137 Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.308153 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf988b4bd-ls7hp" event={"ID":"9d2451a0-4896-46e4-9b9e-e309ccdf02f2","Type":"ContainerDied","Data":"ca1454b684ac38a1446a4ef033530125143bc9a4b12de7af96f54377b563b8fb"} Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.308172 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf988b4bd-ls7hp" event={"ID":"9d2451a0-4896-46e4-9b9e-e309ccdf02f2","Type":"ContainerDied","Data":"e16c7aef4fd28d5aab8b64abddb94def56344a74657a3085b40c7035b723d59e"} Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.308188 4676 scope.go:117] "RemoveContainer" containerID="6c39f8f7eb482d941eb276b021603978c3c819c52d54e3619ae39a0cd7e29c64" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.308268 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-bf988b4bd-ls7hp" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.315606 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"32d678d8-bc5d-4993-afc3-c6700d53b00e","Type":"ContainerStarted","Data":"9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142"} Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.347915 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.347893365 podStartE2EDuration="2.347893365s" podCreationTimestamp="2026-01-24 00:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:24:37.330698431 +0000 UTC m=+1261.360669432" watchObservedRunningTime="2026-01-24 00:24:37.347893365 +0000 UTC m=+1261.377864356" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.419156 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bf988b4bd-ls7hp"] Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.429073 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-bf988b4bd-ls7hp"] Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.550149 4676 scope.go:117] "RemoveContainer" containerID="ca1454b684ac38a1446a4ef033530125143bc9a4b12de7af96f54377b563b8fb" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.567535 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.575758 4676 scope.go:117] "RemoveContainer" containerID="6c39f8f7eb482d941eb276b021603978c3c819c52d54e3619ae39a0cd7e29c64" Jan 24 00:24:37 crc kubenswrapper[4676]: E0124 00:24:37.580017 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6c39f8f7eb482d941eb276b021603978c3c819c52d54e3619ae39a0cd7e29c64\": container with ID starting with 6c39f8f7eb482d941eb276b021603978c3c819c52d54e3619ae39a0cd7e29c64 not found: ID does not exist" containerID="6c39f8f7eb482d941eb276b021603978c3c819c52d54e3619ae39a0cd7e29c64" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.580056 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c39f8f7eb482d941eb276b021603978c3c819c52d54e3619ae39a0cd7e29c64"} err="failed to get container status \"6c39f8f7eb482d941eb276b021603978c3c819c52d54e3619ae39a0cd7e29c64\": rpc error: code = NotFound desc = could not find container \"6c39f8f7eb482d941eb276b021603978c3c819c52d54e3619ae39a0cd7e29c64\": container with ID starting with 6c39f8f7eb482d941eb276b021603978c3c819c52d54e3619ae39a0cd7e29c64 not found: ID does not exist" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.580083 4676 scope.go:117] "RemoveContainer" containerID="ca1454b684ac38a1446a4ef033530125143bc9a4b12de7af96f54377b563b8fb" Jan 24 00:24:37 crc kubenswrapper[4676]: E0124 00:24:37.580575 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca1454b684ac38a1446a4ef033530125143bc9a4b12de7af96f54377b563b8fb\": container with ID starting with ca1454b684ac38a1446a4ef033530125143bc9a4b12de7af96f54377b563b8fb not found: ID does not exist" containerID="ca1454b684ac38a1446a4ef033530125143bc9a4b12de7af96f54377b563b8fb" Jan 24 00:24:37 crc kubenswrapper[4676]: I0124 00:24:37.580609 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca1454b684ac38a1446a4ef033530125143bc9a4b12de7af96f54377b563b8fb"} err="failed to get container status \"ca1454b684ac38a1446a4ef033530125143bc9a4b12de7af96f54377b563b8fb\": rpc error: code = NotFound desc = could not find container \"ca1454b684ac38a1446a4ef033530125143bc9a4b12de7af96f54377b563b8fb\": container with ID 
starting with ca1454b684ac38a1446a4ef033530125143bc9a4b12de7af96f54377b563b8fb not found: ID does not exist" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.286522 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d70bd7f-2410-490a-897f-c17149dbc01d" path="/var/lib/kubelet/pods/0d70bd7f-2410-490a-897f-c17149dbc01d/volumes" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.304769 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" path="/var/lib/kubelet/pods/9d2451a0-4896-46e4-9b9e-e309ccdf02f2/volumes" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.365474 4676 generic.go:334] "Generic (PLEG): container finished" podID="30206279-8386-4575-9457-e76760505e8d" containerID="74fc38a94fa794a834f28752c4c0e574b0f6bd303f04b96af63769a91e8933fe" exitCode=0 Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.365551 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30206279-8386-4575-9457-e76760505e8d","Type":"ContainerDied","Data":"74fc38a94fa794a834f28752c4c0e574b0f6bd303f04b96af63769a91e8933fe"} Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.378722 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a8b8762-d78d-44ad-af70-5d53aced9c93","Type":"ContainerStarted","Data":"261a0919f777a0b18febac5db55c7874e12a7dc53a2edb25ac33fa7937a21ad0"} Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.378756 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a8b8762-d78d-44ad-af70-5d53aced9c93","Type":"ContainerStarted","Data":"3d1351cf150d3d0db98e68d8cbd4da0fc2fc3e8bd6a7873c16725d321e9a9ab1"} Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.378767 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"3a8b8762-d78d-44ad-af70-5d53aced9c93","Type":"ContainerStarted","Data":"000b1ff3c0019bc3bb4c62b73efcef9044b9001595b681004d951af702c62578"} Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.488008 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.508045 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5080280630000003 podStartE2EDuration="2.508028063s" podCreationTimestamp="2026-01-24 00:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:24:38.400341642 +0000 UTC m=+1262.430312643" watchObservedRunningTime="2026-01-24 00:24:38.508028063 +0000 UTC m=+1262.537999064" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.610421 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-config-data\") pod \"30206279-8386-4575-9457-e76760505e8d\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.610499 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-sg-core-conf-yaml\") pod \"30206279-8386-4575-9457-e76760505e8d\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.610543 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhdb9\" (UniqueName: \"kubernetes.io/projected/30206279-8386-4575-9457-e76760505e8d-kube-api-access-qhdb9\") pod \"30206279-8386-4575-9457-e76760505e8d\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " Jan 24 00:24:38 crc kubenswrapper[4676]: 
I0124 00:24:38.610610 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30206279-8386-4575-9457-e76760505e8d-run-httpd\") pod \"30206279-8386-4575-9457-e76760505e8d\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.610649 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30206279-8386-4575-9457-e76760505e8d-log-httpd\") pod \"30206279-8386-4575-9457-e76760505e8d\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.610795 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-combined-ca-bundle\") pod \"30206279-8386-4575-9457-e76760505e8d\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.610831 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-scripts\") pod \"30206279-8386-4575-9457-e76760505e8d\" (UID: \"30206279-8386-4575-9457-e76760505e8d\") " Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.611209 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30206279-8386-4575-9457-e76760505e8d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "30206279-8386-4575-9457-e76760505e8d" (UID: "30206279-8386-4575-9457-e76760505e8d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.611359 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30206279-8386-4575-9457-e76760505e8d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "30206279-8386-4575-9457-e76760505e8d" (UID: "30206279-8386-4575-9457-e76760505e8d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.611773 4676 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30206279-8386-4575-9457-e76760505e8d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.611788 4676 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30206279-8386-4575-9457-e76760505e8d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.615484 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30206279-8386-4575-9457-e76760505e8d-kube-api-access-qhdb9" (OuterVolumeSpecName: "kube-api-access-qhdb9") pod "30206279-8386-4575-9457-e76760505e8d" (UID: "30206279-8386-4575-9457-e76760505e8d"). InnerVolumeSpecName "kube-api-access-qhdb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.619868 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-scripts" (OuterVolumeSpecName: "scripts") pod "30206279-8386-4575-9457-e76760505e8d" (UID: "30206279-8386-4575-9457-e76760505e8d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.648795 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "30206279-8386-4575-9457-e76760505e8d" (UID: "30206279-8386-4575-9457-e76760505e8d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.708940 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30206279-8386-4575-9457-e76760505e8d" (UID: "30206279-8386-4575-9457-e76760505e8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.713655 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.713760 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.713851 4676 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.713918 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhdb9\" (UniqueName: \"kubernetes.io/projected/30206279-8386-4575-9457-e76760505e8d-kube-api-access-qhdb9\") on node 
\"crc\" DevicePath \"\"" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.733530 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-config-data" (OuterVolumeSpecName: "config-data") pod "30206279-8386-4575-9457-e76760505e8d" (UID: "30206279-8386-4575-9457-e76760505e8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:38 crc kubenswrapper[4676]: I0124 00:24:38.815457 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30206279-8386-4575-9457-e76760505e8d-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.388113 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.391040 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30206279-8386-4575-9457-e76760505e8d","Type":"ContainerDied","Data":"27bde3b4f936759dfdde3e8fe3259a873617d7354ea2e22a706cd0edd462041d"} Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.391099 4676 scope.go:117] "RemoveContainer" containerID="cf2f47547ef5fe4c3cd5d6009235ea3a947829e0913a6e11bb70499604aaa440" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.425910 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.428171 4676 scope.go:117] "RemoveContainer" containerID="993ed8d33422939d102af22915be4b96f85bac159232e43af92e15ee1383a46a" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.436643 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.446096 4676 scope.go:117] "RemoveContainer" 
containerID="74fc38a94fa794a834f28752c4c0e574b0f6bd303f04b96af63769a91e8933fe" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.465152 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:24:39 crc kubenswrapper[4676]: E0124 00:24:39.465599 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.465621 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" Jan 24 00:24:39 crc kubenswrapper[4676]: E0124 00:24:39.465633 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30206279-8386-4575-9457-e76760505e8d" containerName="ceilometer-central-agent" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.465644 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="30206279-8386-4575-9457-e76760505e8d" containerName="ceilometer-central-agent" Jan 24 00:24:39 crc kubenswrapper[4676]: E0124 00:24:39.465673 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30206279-8386-4575-9457-e76760505e8d" containerName="proxy-httpd" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.465682 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="30206279-8386-4575-9457-e76760505e8d" containerName="proxy-httpd" Jan 24 00:24:39 crc kubenswrapper[4676]: E0124 00:24:39.465696 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30206279-8386-4575-9457-e76760505e8d" containerName="ceilometer-notification-agent" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.465704 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="30206279-8386-4575-9457-e76760505e8d" containerName="ceilometer-notification-agent" Jan 24 00:24:39 crc kubenswrapper[4676]: E0124 00:24:39.465716 4676 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon-log" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.469724 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon-log" Jan 24 00:24:39 crc kubenswrapper[4676]: E0124 00:24:39.469747 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.469755 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" Jan 24 00:24:39 crc kubenswrapper[4676]: E0124 00:24:39.469769 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30206279-8386-4575-9457-e76760505e8d" containerName="sg-core" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.469777 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="30206279-8386-4575-9457-e76760505e8d" containerName="sg-core" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.470084 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon-log" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.470116 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.470128 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="30206279-8386-4575-9457-e76760505e8d" containerName="sg-core" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.470143 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="30206279-8386-4575-9457-e76760505e8d" containerName="proxy-httpd" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.470156 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="30206279-8386-4575-9457-e76760505e8d" 
containerName="ceilometer-central-agent" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.470173 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="30206279-8386-4575-9457-e76760505e8d" containerName="ceilometer-notification-agent" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.470634 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2451a0-4896-46e4-9b9e-e309ccdf02f2" containerName="horizon" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.470795 4676 scope.go:117] "RemoveContainer" containerID="acf37fd15df58f0c480e8ca3019535bc0893909734220600f3f004d89aa3edc9" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.476042 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.476599 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.481965 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.481985 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.482247 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.528859 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.528921 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-config-data\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.528951 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c825a1a-2a44-4f85-9271-0614aa8a07a4-log-httpd\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.529020 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.529055 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-scripts\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.529169 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq2lz\" (UniqueName: \"kubernetes.io/projected/3c825a1a-2a44-4f85-9271-0614aa8a07a4-kube-api-access-nq2lz\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.529194 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.529211 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c825a1a-2a44-4f85-9271-0614aa8a07a4-run-httpd\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.630629 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.630686 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-scripts\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.630765 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq2lz\" (UniqueName: \"kubernetes.io/projected/3c825a1a-2a44-4f85-9271-0614aa8a07a4-kube-api-access-nq2lz\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.630789 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.630809 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c825a1a-2a44-4f85-9271-0614aa8a07a4-run-httpd\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.630830 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.630852 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-config-data\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.630873 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c825a1a-2a44-4f85-9271-0614aa8a07a4-log-httpd\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.631357 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c825a1a-2a44-4f85-9271-0614aa8a07a4-log-httpd\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.631485 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c825a1a-2a44-4f85-9271-0614aa8a07a4-run-httpd\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 
00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.635116 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-scripts\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.636621 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-config-data\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.637446 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.637947 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.639777 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.663314 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq2lz\" (UniqueName: \"kubernetes.io/projected/3c825a1a-2a44-4f85-9271-0614aa8a07a4-kube-api-access-nq2lz\") pod 
\"ceilometer-0\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " pod="openstack/ceilometer-0" Jan 24 00:24:39 crc kubenswrapper[4676]: I0124 00:24:39.810696 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 00:24:40 crc kubenswrapper[4676]: I0124 00:24:40.270562 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30206279-8386-4575-9457-e76760505e8d" path="/var/lib/kubelet/pods/30206279-8386-4575-9457-e76760505e8d/volumes" Jan 24 00:24:40 crc kubenswrapper[4676]: I0124 00:24:40.383143 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:24:40 crc kubenswrapper[4676]: I0124 00:24:40.647088 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 24 00:24:40 crc kubenswrapper[4676]: I0124 00:24:40.733344 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 24 00:24:41 crc kubenswrapper[4676]: I0124 00:24:41.413227 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c825a1a-2a44-4f85-9271-0614aa8a07a4","Type":"ContainerStarted","Data":"779dd89d442e8cfef9c213f780a757ec699bc0eef79a17d12ff2edf42b44250b"} Jan 24 00:24:41 crc kubenswrapper[4676]: I0124 00:24:41.413600 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c825a1a-2a44-4f85-9271-0614aa8a07a4","Type":"ContainerStarted","Data":"9944d0b2b4ee648d3da07eb6834b39006236c631ca51eb113778101cb554219b"} Jan 24 00:24:42 crc kubenswrapper[4676]: I0124 00:24:42.423488 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c825a1a-2a44-4f85-9271-0614aa8a07a4","Type":"ContainerStarted","Data":"33f9fd41388e48671ab7c03de55b7972b9dfd2592bc4af417c289b099fe20556"} Jan 24 00:24:43 crc kubenswrapper[4676]: I0124 00:24:43.435403 4676 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c825a1a-2a44-4f85-9271-0614aa8a07a4","Type":"ContainerStarted","Data":"e7c4de9128cd6e6e3de76a1907fc12bdd1ffdc61da41090dc5b038476ebe4f1a"} Jan 24 00:24:44 crc kubenswrapper[4676]: I0124 00:24:44.444450 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c825a1a-2a44-4f85-9271-0614aa8a07a4","Type":"ContainerStarted","Data":"99145c476bd92edc34903b04d49544d24801d588d3c6e0a3159018baae1cfdbf"} Jan 24 00:24:44 crc kubenswrapper[4676]: I0124 00:24:44.445929 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 00:24:44 crc kubenswrapper[4676]: I0124 00:24:44.471143 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8863444390000002 podStartE2EDuration="5.471125986s" podCreationTimestamp="2026-01-24 00:24:39 +0000 UTC" firstStartedPulling="2026-01-24 00:24:40.392638072 +0000 UTC m=+1264.422609093" lastFinishedPulling="2026-01-24 00:24:43.977419629 +0000 UTC m=+1268.007390640" observedRunningTime="2026-01-24 00:24:44.470003182 +0000 UTC m=+1268.499974193" watchObservedRunningTime="2026-01-24 00:24:44.471125986 +0000 UTC m=+1268.501096977" Jan 24 00:24:44 crc kubenswrapper[4676]: I0124 00:24:44.624233 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 24 00:24:45 crc kubenswrapper[4676]: I0124 00:24:45.647343 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 24 00:24:45 crc kubenswrapper[4676]: I0124 00:24:45.697336 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 24 00:24:46 crc kubenswrapper[4676]: I0124 00:24:46.505350 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 24 00:24:46 crc 
kubenswrapper[4676]: I0124 00:24:46.986090 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 00:24:46 crc kubenswrapper[4676]: I0124 00:24:46.986149 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 00:24:48 crc kubenswrapper[4676]: I0124 00:24:48.069752 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3a8b8762-d78d-44ad-af70-5d53aced9c93" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 00:24:48 crc kubenswrapper[4676]: I0124 00:24:48.069848 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3a8b8762-d78d-44ad-af70-5d53aced9c93" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 00:24:53 crc kubenswrapper[4676]: E0124 00:24:53.385073 4676 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84633021_2bad_4066_a01c_44fef6902524.slice/crio-conmon-6301d47d4294644c70fd618b97e56c019270427fbdab822047ac3bfe88e7e594.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84633021_2bad_4066_a01c_44fef6902524.slice/crio-6301d47d4294644c70fd618b97e56c019270427fbdab822047ac3bfe88e7e594.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65324a0f_f62a_4807_85d2_e4b607b2a0b4.slice/crio-e4451937240be2f867c73971e845e8f6b7d4b6bee35057ae40b7f593e4b5d73c.scope\": RecentStats: unable to find data in memory cache]" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 
00:24:53.534715 4676 generic.go:334] "Generic (PLEG): container finished" podID="65324a0f-f62a-4807-85d2-e4b607b2a0b4" containerID="e4451937240be2f867c73971e845e8f6b7d4b6bee35057ae40b7f593e4b5d73c" exitCode=137 Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.534806 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65324a0f-f62a-4807-85d2-e4b607b2a0b4","Type":"ContainerDied","Data":"e4451937240be2f867c73971e845e8f6b7d4b6bee35057ae40b7f593e4b5d73c"} Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.537035 4676 generic.go:334] "Generic (PLEG): container finished" podID="84633021-2bad-4066-a01c-44fef6902524" containerID="6301d47d4294644c70fd618b97e56c019270427fbdab822047ac3bfe88e7e594" exitCode=137 Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.537161 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84633021-2bad-4066-a01c-44fef6902524","Type":"ContainerDied","Data":"6301d47d4294644c70fd618b97e56c019270427fbdab822047ac3bfe88e7e594"} Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.537262 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84633021-2bad-4066-a01c-44fef6902524","Type":"ContainerDied","Data":"6e82aa991fb418472d65f2acfb31fecbeef7f12dbcdb1246d40b5110c0b6ab65"} Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.537344 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e82aa991fb418472d65f2acfb31fecbeef7f12dbcdb1246d40b5110c0b6ab65" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.642789 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.650774 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.809085 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9hf5\" (UniqueName: \"kubernetes.io/projected/65324a0f-f62a-4807-85d2-e4b607b2a0b4-kube-api-access-p9hf5\") pod \"65324a0f-f62a-4807-85d2-e4b607b2a0b4\" (UID: \"65324a0f-f62a-4807-85d2-e4b607b2a0b4\") " Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.810326 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65324a0f-f62a-4807-85d2-e4b607b2a0b4-config-data\") pod \"65324a0f-f62a-4807-85d2-e4b607b2a0b4\" (UID: \"65324a0f-f62a-4807-85d2-e4b607b2a0b4\") " Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.810492 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84633021-2bad-4066-a01c-44fef6902524-logs\") pod \"84633021-2bad-4066-a01c-44fef6902524\" (UID: \"84633021-2bad-4066-a01c-44fef6902524\") " Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.810948 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f77n\" (UniqueName: \"kubernetes.io/projected/84633021-2bad-4066-a01c-44fef6902524-kube-api-access-8f77n\") pod \"84633021-2bad-4066-a01c-44fef6902524\" (UID: \"84633021-2bad-4066-a01c-44fef6902524\") " Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.811604 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65324a0f-f62a-4807-85d2-e4b607b2a0b4-combined-ca-bundle\") pod \"65324a0f-f62a-4807-85d2-e4b607b2a0b4\" (UID: \"65324a0f-f62a-4807-85d2-e4b607b2a0b4\") " Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.811748 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84633021-2bad-4066-a01c-44fef6902524-combined-ca-bundle\") pod \"84633021-2bad-4066-a01c-44fef6902524\" (UID: \"84633021-2bad-4066-a01c-44fef6902524\") " Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.811895 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84633021-2bad-4066-a01c-44fef6902524-config-data\") pod \"84633021-2bad-4066-a01c-44fef6902524\" (UID: \"84633021-2bad-4066-a01c-44fef6902524\") " Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.811139 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84633021-2bad-4066-a01c-44fef6902524-logs" (OuterVolumeSpecName: "logs") pod "84633021-2bad-4066-a01c-44fef6902524" (UID: "84633021-2bad-4066-a01c-44fef6902524"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.812891 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84633021-2bad-4066-a01c-44fef6902524-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.814875 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65324a0f-f62a-4807-85d2-e4b607b2a0b4-kube-api-access-p9hf5" (OuterVolumeSpecName: "kube-api-access-p9hf5") pod "65324a0f-f62a-4807-85d2-e4b607b2a0b4" (UID: "65324a0f-f62a-4807-85d2-e4b607b2a0b4"). InnerVolumeSpecName "kube-api-access-p9hf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.816911 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84633021-2bad-4066-a01c-44fef6902524-kube-api-access-8f77n" (OuterVolumeSpecName: "kube-api-access-8f77n") pod "84633021-2bad-4066-a01c-44fef6902524" (UID: "84633021-2bad-4066-a01c-44fef6902524"). InnerVolumeSpecName "kube-api-access-8f77n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.839440 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84633021-2bad-4066-a01c-44fef6902524-config-data" (OuterVolumeSpecName: "config-data") pod "84633021-2bad-4066-a01c-44fef6902524" (UID: "84633021-2bad-4066-a01c-44fef6902524"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.842254 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65324a0f-f62a-4807-85d2-e4b607b2a0b4-config-data" (OuterVolumeSpecName: "config-data") pod "65324a0f-f62a-4807-85d2-e4b607b2a0b4" (UID: "65324a0f-f62a-4807-85d2-e4b607b2a0b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.847522 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84633021-2bad-4066-a01c-44fef6902524-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84633021-2bad-4066-a01c-44fef6902524" (UID: "84633021-2bad-4066-a01c-44fef6902524"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.852457 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65324a0f-f62a-4807-85d2-e4b607b2a0b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65324a0f-f62a-4807-85d2-e4b607b2a0b4" (UID: "65324a0f-f62a-4807-85d2-e4b607b2a0b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.915352 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84633021-2bad-4066-a01c-44fef6902524-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.915408 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9hf5\" (UniqueName: \"kubernetes.io/projected/65324a0f-f62a-4807-85d2-e4b607b2a0b4-kube-api-access-p9hf5\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.915426 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65324a0f-f62a-4807-85d2-e4b607b2a0b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.915438 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f77n\" (UniqueName: \"kubernetes.io/projected/84633021-2bad-4066-a01c-44fef6902524-kube-api-access-8f77n\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.915450 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65324a0f-f62a-4807-85d2-e4b607b2a0b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:53 crc kubenswrapper[4676]: I0124 00:24:53.915464 4676 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84633021-2bad-4066-a01c-44fef6902524-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.549197 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65324a0f-f62a-4807-85d2-e4b607b2a0b4","Type":"ContainerDied","Data":"9d99431ded08302b9fd0007acf0016950fd97f2998e3a74b9630c85127cb16f9"} Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.549552 4676 scope.go:117] "RemoveContainer" containerID="e4451937240be2f867c73971e845e8f6b7d4b6bee35057ae40b7f593e4b5d73c" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.549242 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.550986 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.584224 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.602590 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.623433 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.632425 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.652104 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 00:24:54 crc kubenswrapper[4676]: E0124 00:24:54.652609 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84633021-2bad-4066-a01c-44fef6902524" containerName="nova-metadata-log" Jan 24 00:24:54 
crc kubenswrapper[4676]: I0124 00:24:54.652627 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="84633021-2bad-4066-a01c-44fef6902524" containerName="nova-metadata-log" Jan 24 00:24:54 crc kubenswrapper[4676]: E0124 00:24:54.652640 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84633021-2bad-4066-a01c-44fef6902524" containerName="nova-metadata-metadata" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.652648 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="84633021-2bad-4066-a01c-44fef6902524" containerName="nova-metadata-metadata" Jan 24 00:24:54 crc kubenswrapper[4676]: E0124 00:24:54.652668 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65324a0f-f62a-4807-85d2-e4b607b2a0b4" containerName="nova-cell1-novncproxy-novncproxy" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.652680 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="65324a0f-f62a-4807-85d2-e4b607b2a0b4" containerName="nova-cell1-novncproxy-novncproxy" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.652906 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="84633021-2bad-4066-a01c-44fef6902524" containerName="nova-metadata-metadata" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.652935 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="65324a0f-f62a-4807-85d2-e4b607b2a0b4" containerName="nova-cell1-novncproxy-novncproxy" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.652945 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="84633021-2bad-4066-a01c-44fef6902524" containerName="nova-metadata-log" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.653653 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.665875 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.666113 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.666353 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.678507 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.685247 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.687145 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.691474 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.691677 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.695746 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.732940 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.733006 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6784012-a1a0-4647-929f-50788a8dc5bb-logs\") pod \"nova-metadata-0\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.733051 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.733075 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1499ed8-1041-4980-b9da-cb957cbf215c-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"b1499ed8-1041-4980-b9da-cb957cbf215c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.733118 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1499ed8-1041-4980-b9da-cb957cbf215c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1499ed8-1041-4980-b9da-cb957cbf215c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.733152 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-config-data\") pod \"nova-metadata-0\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.733185 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1499ed8-1041-4980-b9da-cb957cbf215c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1499ed8-1041-4980-b9da-cb957cbf215c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.733209 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prt8v\" (UniqueName: \"kubernetes.io/projected/b1499ed8-1041-4980-b9da-cb957cbf215c-kube-api-access-prt8v\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1499ed8-1041-4980-b9da-cb957cbf215c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.733231 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1499ed8-1041-4980-b9da-cb957cbf215c-vencrypt-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"b1499ed8-1041-4980-b9da-cb957cbf215c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.733272 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4dfw\" (UniqueName: \"kubernetes.io/projected/d6784012-a1a0-4647-929f-50788a8dc5bb-kube-api-access-p4dfw\") pod \"nova-metadata-0\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.834279 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4dfw\" (UniqueName: \"kubernetes.io/projected/d6784012-a1a0-4647-929f-50788a8dc5bb-kube-api-access-p4dfw\") pod \"nova-metadata-0\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.834340 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.834393 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6784012-a1a0-4647-929f-50788a8dc5bb-logs\") pod \"nova-metadata-0\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.834434 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc 
kubenswrapper[4676]: I0124 00:24:54.834454 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1499ed8-1041-4980-b9da-cb957cbf215c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1499ed8-1041-4980-b9da-cb957cbf215c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.834491 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1499ed8-1041-4980-b9da-cb957cbf215c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1499ed8-1041-4980-b9da-cb957cbf215c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.834519 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-config-data\") pod \"nova-metadata-0\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.834542 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1499ed8-1041-4980-b9da-cb957cbf215c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1499ed8-1041-4980-b9da-cb957cbf215c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.834560 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prt8v\" (UniqueName: \"kubernetes.io/projected/b1499ed8-1041-4980-b9da-cb957cbf215c-kube-api-access-prt8v\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1499ed8-1041-4980-b9da-cb957cbf215c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.834580 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1499ed8-1041-4980-b9da-cb957cbf215c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1499ed8-1041-4980-b9da-cb957cbf215c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.835279 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6784012-a1a0-4647-929f-50788a8dc5bb-logs\") pod \"nova-metadata-0\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.838941 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.841459 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-config-data\") pod \"nova-metadata-0\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.842866 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1499ed8-1041-4980-b9da-cb957cbf215c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1499ed8-1041-4980-b9da-cb957cbf215c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.846760 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1499ed8-1041-4980-b9da-cb957cbf215c-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"b1499ed8-1041-4980-b9da-cb957cbf215c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.846841 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.847897 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1499ed8-1041-4980-b9da-cb957cbf215c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1499ed8-1041-4980-b9da-cb957cbf215c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.851036 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1499ed8-1041-4980-b9da-cb957cbf215c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1499ed8-1041-4980-b9da-cb957cbf215c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.853010 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4dfw\" (UniqueName: \"kubernetes.io/projected/d6784012-a1a0-4647-929f-50788a8dc5bb-kube-api-access-p4dfw\") pod \"nova-metadata-0\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " pod="openstack/nova-metadata-0" Jan 24 00:24:54 crc kubenswrapper[4676]: I0124 00:24:54.860624 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prt8v\" (UniqueName: \"kubernetes.io/projected/b1499ed8-1041-4980-b9da-cb957cbf215c-kube-api-access-prt8v\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1499ed8-1041-4980-b9da-cb957cbf215c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:54 crc 
kubenswrapper[4676]: I0124 00:24:54.983846 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:24:55 crc kubenswrapper[4676]: I0124 00:24:55.026322 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 00:24:55 crc kubenswrapper[4676]: I0124 00:24:55.519864 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 00:24:55 crc kubenswrapper[4676]: W0124 00:24:55.530642 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1499ed8_1041_4980_b9da_cb957cbf215c.slice/crio-6c92d02f4ee03a49f06a51e0afb40d6b241270cd7d28e0e7dfe00dcde1fa435f WatchSource:0}: Error finding container 6c92d02f4ee03a49f06a51e0afb40d6b241270cd7d28e0e7dfe00dcde1fa435f: Status 404 returned error can't find the container with id 6c92d02f4ee03a49f06a51e0afb40d6b241270cd7d28e0e7dfe00dcde1fa435f Jan 24 00:24:55 crc kubenswrapper[4676]: I0124 00:24:55.539344 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 00:24:55 crc kubenswrapper[4676]: I0124 00:24:55.562257 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b1499ed8-1041-4980-b9da-cb957cbf215c","Type":"ContainerStarted","Data":"6c92d02f4ee03a49f06a51e0afb40d6b241270cd7d28e0e7dfe00dcde1fa435f"} Jan 24 00:24:56 crc kubenswrapper[4676]: I0124 00:24:56.267890 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65324a0f-f62a-4807-85d2-e4b607b2a0b4" path="/var/lib/kubelet/pods/65324a0f-f62a-4807-85d2-e4b607b2a0b4/volumes" Jan 24 00:24:56 crc kubenswrapper[4676]: I0124 00:24:56.268962 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84633021-2bad-4066-a01c-44fef6902524" path="/var/lib/kubelet/pods/84633021-2bad-4066-a01c-44fef6902524/volumes" Jan 24 
00:24:56 crc kubenswrapper[4676]: I0124 00:24:56.578333 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b1499ed8-1041-4980-b9da-cb957cbf215c","Type":"ContainerStarted","Data":"bbe545070675448028e845841b263321ae62b0b7655e0707a2be07d263d064e6"} Jan 24 00:24:56 crc kubenswrapper[4676]: I0124 00:24:56.583698 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d6784012-a1a0-4647-929f-50788a8dc5bb","Type":"ContainerStarted","Data":"069d417103a5b3d1a63ca0a83a7bce5e55fb783cc6b5b9635b78d676f26ab99b"} Jan 24 00:24:56 crc kubenswrapper[4676]: I0124 00:24:56.583741 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d6784012-a1a0-4647-929f-50788a8dc5bb","Type":"ContainerStarted","Data":"dc7b56efe46aa7c7308ac62b0923d3c3d5f8466d0d2cfaa919b9e8096c2effb1"} Jan 24 00:24:56 crc kubenswrapper[4676]: I0124 00:24:56.583751 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d6784012-a1a0-4647-929f-50788a8dc5bb","Type":"ContainerStarted","Data":"01fe7e14f549bff2abe35a4d5403466f7bbd36dfe57daa6ef4e421db5dcd7cf7"} Jan 24 00:24:56 crc kubenswrapper[4676]: I0124 00:24:56.605069 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.605042322 podStartE2EDuration="2.605042322s" podCreationTimestamp="2026-01-24 00:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:24:56.601279567 +0000 UTC m=+1280.631250598" watchObservedRunningTime="2026-01-24 00:24:56.605042322 +0000 UTC m=+1280.635013363" Jan 24 00:24:56 crc kubenswrapper[4676]: I0124 00:24:56.635431 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.635408118 
podStartE2EDuration="2.635408118s" podCreationTimestamp="2026-01-24 00:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:24:56.631283852 +0000 UTC m=+1280.661254853" watchObservedRunningTime="2026-01-24 00:24:56.635408118 +0000 UTC m=+1280.665379129" Jan 24 00:24:56 crc kubenswrapper[4676]: I0124 00:24:56.988940 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 00:24:56 crc kubenswrapper[4676]: I0124 00:24:56.989355 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 00:24:56 crc kubenswrapper[4676]: I0124 00:24:56.989689 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 00:24:56 crc kubenswrapper[4676]: I0124 00:24:56.989815 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 00:24:56 crc kubenswrapper[4676]: I0124 00:24:56.998729 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.003554 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.178249 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-tkk74"] Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.179639 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.200087 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-tkk74"] Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.278999 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.279172 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.279196 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.279227 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-config\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.279243 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.279326 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5hbp\" (UniqueName: \"kubernetes.io/projected/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-kube-api-access-s5hbp\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.381182 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.381305 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.381328 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.381348 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-config\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.381366 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.381423 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5hbp\" (UniqueName: \"kubernetes.io/projected/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-kube-api-access-s5hbp\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.382390 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.382430 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.382466 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-dns-svc\") pod 
\"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.383390 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.383725 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-config\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.402403 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5hbp\" (UniqueName: \"kubernetes.io/projected/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-kube-api-access-s5hbp\") pod \"dnsmasq-dns-cd5cbd7b9-tkk74\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:57 crc kubenswrapper[4676]: I0124 00:24:57.496844 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:24:58 crc kubenswrapper[4676]: I0124 00:24:58.231222 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-tkk74"] Jan 24 00:24:58 crc kubenswrapper[4676]: W0124 00:24:58.234054 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a3231ad_bc2c_4a51_813d_cfd1a11c4fc4.slice/crio-85858d9bee0a25807ae1aa70febaf41556800347d26ef61522b473ffb2662a4f WatchSource:0}: Error finding container 85858d9bee0a25807ae1aa70febaf41556800347d26ef61522b473ffb2662a4f: Status 404 returned error can't find the container with id 85858d9bee0a25807ae1aa70febaf41556800347d26ef61522b473ffb2662a4f Jan 24 00:24:58 crc kubenswrapper[4676]: I0124 00:24:58.619675 4676 generic.go:334] "Generic (PLEG): container finished" podID="6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4" containerID="e9acf51095b7dd6a0ad30ae069a3cb2ce0ad324415a9a8bc556468f57ab1c34d" exitCode=0 Jan 24 00:24:58 crc kubenswrapper[4676]: I0124 00:24:58.619779 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" event={"ID":"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4","Type":"ContainerDied","Data":"e9acf51095b7dd6a0ad30ae069a3cb2ce0ad324415a9a8bc556468f57ab1c34d"} Jan 24 00:24:58 crc kubenswrapper[4676]: I0124 00:24:58.619916 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" event={"ID":"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4","Type":"ContainerStarted","Data":"85858d9bee0a25807ae1aa70febaf41556800347d26ef61522b473ffb2662a4f"} Jan 24 00:24:59 crc kubenswrapper[4676]: I0124 00:24:59.518598 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:24:59 crc kubenswrapper[4676]: I0124 00:24:59.628501 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" 
event={"ID":"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4","Type":"ContainerStarted","Data":"8189f6b4420a2e8c611c03a3298dcf14f84845a898137968cf5f45e6c0c37685"} Jan 24 00:24:59 crc kubenswrapper[4676]: I0124 00:24:59.628622 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3a8b8762-d78d-44ad-af70-5d53aced9c93" containerName="nova-api-log" containerID="cri-o://3d1351cf150d3d0db98e68d8cbd4da0fc2fc3e8bd6a7873c16725d321e9a9ab1" gracePeriod=30 Jan 24 00:24:59 crc kubenswrapper[4676]: I0124 00:24:59.629119 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3a8b8762-d78d-44ad-af70-5d53aced9c93" containerName="nova-api-api" containerID="cri-o://261a0919f777a0b18febac5db55c7874e12a7dc53a2edb25ac33fa7937a21ad0" gracePeriod=30 Jan 24 00:24:59 crc kubenswrapper[4676]: I0124 00:24:59.661992 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" podStartSLOduration=2.66197296 podStartE2EDuration="2.66197296s" podCreationTimestamp="2026-01-24 00:24:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:24:59.65343571 +0000 UTC m=+1283.683406711" watchObservedRunningTime="2026-01-24 00:24:59.66197296 +0000 UTC m=+1283.691943961" Jan 24 00:24:59 crc kubenswrapper[4676]: I0124 00:24:59.888983 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:24:59 crc kubenswrapper[4676]: I0124 00:24:59.889644 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="ceilometer-central-agent" containerID="cri-o://779dd89d442e8cfef9c213f780a757ec699bc0eef79a17d12ff2edf42b44250b" gracePeriod=30 Jan 24 00:24:59 crc kubenswrapper[4676]: I0124 00:24:59.890489 4676 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="proxy-httpd" containerID="cri-o://99145c476bd92edc34903b04d49544d24801d588d3c6e0a3159018baae1cfdbf" gracePeriod=30 Jan 24 00:24:59 crc kubenswrapper[4676]: I0124 00:24:59.890559 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="sg-core" containerID="cri-o://e7c4de9128cd6e6e3de76a1907fc12bdd1ffdc61da41090dc5b038476ebe4f1a" gracePeriod=30 Jan 24 00:24:59 crc kubenswrapper[4676]: I0124 00:24:59.890606 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="ceilometer-notification-agent" containerID="cri-o://33f9fd41388e48671ab7c03de55b7972b9dfd2592bc4af417c289b099fe20556" gracePeriod=30 Jan 24 00:24:59 crc kubenswrapper[4676]: I0124 00:24:59.912732 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.193:3000/\": EOF" Jan 24 00:24:59 crc kubenswrapper[4676]: I0124 00:24:59.984788 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:25:00 crc kubenswrapper[4676]: I0124 00:25:00.026925 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 00:25:00 crc kubenswrapper[4676]: I0124 00:25:00.027124 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 00:25:00 crc kubenswrapper[4676]: I0124 00:25:00.638947 4676 generic.go:334] "Generic (PLEG): container finished" podID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" 
containerID="99145c476bd92edc34903b04d49544d24801d588d3c6e0a3159018baae1cfdbf" exitCode=0 Jan 24 00:25:00 crc kubenswrapper[4676]: I0124 00:25:00.639939 4676 generic.go:334] "Generic (PLEG): container finished" podID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerID="e7c4de9128cd6e6e3de76a1907fc12bdd1ffdc61da41090dc5b038476ebe4f1a" exitCode=2 Jan 24 00:25:00 crc kubenswrapper[4676]: I0124 00:25:00.638987 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c825a1a-2a44-4f85-9271-0614aa8a07a4","Type":"ContainerDied","Data":"99145c476bd92edc34903b04d49544d24801d588d3c6e0a3159018baae1cfdbf"} Jan 24 00:25:00 crc kubenswrapper[4676]: I0124 00:25:00.640042 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c825a1a-2a44-4f85-9271-0614aa8a07a4","Type":"ContainerDied","Data":"e7c4de9128cd6e6e3de76a1907fc12bdd1ffdc61da41090dc5b038476ebe4f1a"} Jan 24 00:25:00 crc kubenswrapper[4676]: I0124 00:25:00.640057 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c825a1a-2a44-4f85-9271-0614aa8a07a4","Type":"ContainerDied","Data":"779dd89d442e8cfef9c213f780a757ec699bc0eef79a17d12ff2edf42b44250b"} Jan 24 00:25:00 crc kubenswrapper[4676]: I0124 00:25:00.640015 4676 generic.go:334] "Generic (PLEG): container finished" podID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerID="779dd89d442e8cfef9c213f780a757ec699bc0eef79a17d12ff2edf42b44250b" exitCode=0 Jan 24 00:25:00 crc kubenswrapper[4676]: I0124 00:25:00.641862 4676 generic.go:334] "Generic (PLEG): container finished" podID="3a8b8762-d78d-44ad-af70-5d53aced9c93" containerID="3d1351cf150d3d0db98e68d8cbd4da0fc2fc3e8bd6a7873c16725d321e9a9ab1" exitCode=143 Jan 24 00:25:00 crc kubenswrapper[4676]: I0124 00:25:00.642020 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"3a8b8762-d78d-44ad-af70-5d53aced9c93","Type":"ContainerDied","Data":"3d1351cf150d3d0db98e68d8cbd4da0fc2fc3e8bd6a7873c16725d321e9a9ab1"} Jan 24 00:25:00 crc kubenswrapper[4676]: I0124 00:25:00.642138 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:25:02 crc kubenswrapper[4676]: I0124 00:25:02.697355 4676 generic.go:334] "Generic (PLEG): container finished" podID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerID="33f9fd41388e48671ab7c03de55b7972b9dfd2592bc4af417c289b099fe20556" exitCode=0 Jan 24 00:25:02 crc kubenswrapper[4676]: I0124 00:25:02.697473 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c825a1a-2a44-4f85-9271-0614aa8a07a4","Type":"ContainerDied","Data":"33f9fd41388e48671ab7c03de55b7972b9dfd2592bc4af417c289b099fe20556"} Jan 24 00:25:02 crc kubenswrapper[4676]: I0124 00:25:02.975493 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.096536 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-config-data\") pod \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.097083 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq2lz\" (UniqueName: \"kubernetes.io/projected/3c825a1a-2a44-4f85-9271-0614aa8a07a4-kube-api-access-nq2lz\") pod \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.097149 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3c825a1a-2a44-4f85-9271-0614aa8a07a4-log-httpd\") pod \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.097220 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-scripts\") pod \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.097271 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c825a1a-2a44-4f85-9271-0614aa8a07a4-run-httpd\") pod \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.097770 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c825a1a-2a44-4f85-9271-0614aa8a07a4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3c825a1a-2a44-4f85-9271-0614aa8a07a4" (UID: "3c825a1a-2a44-4f85-9271-0614aa8a07a4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.098368 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c825a1a-2a44-4f85-9271-0614aa8a07a4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3c825a1a-2a44-4f85-9271-0614aa8a07a4" (UID: "3c825a1a-2a44-4f85-9271-0614aa8a07a4"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.098522 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-combined-ca-bundle\") pod \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.098575 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-sg-core-conf-yaml\") pod \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.098642 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-ceilometer-tls-certs\") pod \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\" (UID: \"3c825a1a-2a44-4f85-9271-0614aa8a07a4\") " Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.099644 4676 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c825a1a-2a44-4f85-9271-0614aa8a07a4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.099671 4676 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c825a1a-2a44-4f85-9271-0614aa8a07a4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.122799 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-scripts" (OuterVolumeSpecName: "scripts") pod "3c825a1a-2a44-4f85-9271-0614aa8a07a4" (UID: "3c825a1a-2a44-4f85-9271-0614aa8a07a4"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.128095 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c825a1a-2a44-4f85-9271-0614aa8a07a4-kube-api-access-nq2lz" (OuterVolumeSpecName: "kube-api-access-nq2lz") pod "3c825a1a-2a44-4f85-9271-0614aa8a07a4" (UID: "3c825a1a-2a44-4f85-9271-0614aa8a07a4"). InnerVolumeSpecName "kube-api-access-nq2lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.182428 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3c825a1a-2a44-4f85-9271-0614aa8a07a4" (UID: "3c825a1a-2a44-4f85-9271-0614aa8a07a4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.202833 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq2lz\" (UniqueName: \"kubernetes.io/projected/3c825a1a-2a44-4f85-9271-0614aa8a07a4-kube-api-access-nq2lz\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.202862 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.202872 4676 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.211043 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3c825a1a-2a44-4f85-9271-0614aa8a07a4" (UID: "3c825a1a-2a44-4f85-9271-0614aa8a07a4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.231007 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c825a1a-2a44-4f85-9271-0614aa8a07a4" (UID: "3c825a1a-2a44-4f85-9271-0614aa8a07a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.243571 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.264935 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-config-data" (OuterVolumeSpecName: "config-data") pod "3c825a1a-2a44-4f85-9271-0614aa8a07a4" (UID: "3c825a1a-2a44-4f85-9271-0614aa8a07a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.307762 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbwbq\" (UniqueName: \"kubernetes.io/projected/3a8b8762-d78d-44ad-af70-5d53aced9c93-kube-api-access-zbwbq\") pod \"3a8b8762-d78d-44ad-af70-5d53aced9c93\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.308476 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a8b8762-d78d-44ad-af70-5d53aced9c93-logs\") pod \"3a8b8762-d78d-44ad-af70-5d53aced9c93\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.308527 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8b8762-d78d-44ad-af70-5d53aced9c93-config-data\") pod \"3a8b8762-d78d-44ad-af70-5d53aced9c93\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.308701 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8b8762-d78d-44ad-af70-5d53aced9c93-combined-ca-bundle\") pod \"3a8b8762-d78d-44ad-af70-5d53aced9c93\" (UID: \"3a8b8762-d78d-44ad-af70-5d53aced9c93\") " Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.308861 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8b8762-d78d-44ad-af70-5d53aced9c93-logs" (OuterVolumeSpecName: "logs") pod "3a8b8762-d78d-44ad-af70-5d53aced9c93" (UID: "3a8b8762-d78d-44ad-af70-5d53aced9c93"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.309702 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a8b8762-d78d-44ad-af70-5d53aced9c93-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.309717 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.309726 4676 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.309746 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c825a1a-2a44-4f85-9271-0614aa8a07a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.329436 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8b8762-d78d-44ad-af70-5d53aced9c93-kube-api-access-zbwbq" (OuterVolumeSpecName: "kube-api-access-zbwbq") pod "3a8b8762-d78d-44ad-af70-5d53aced9c93" (UID: "3a8b8762-d78d-44ad-af70-5d53aced9c93"). InnerVolumeSpecName "kube-api-access-zbwbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.334392 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a8b8762-d78d-44ad-af70-5d53aced9c93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a8b8762-d78d-44ad-af70-5d53aced9c93" (UID: "3a8b8762-d78d-44ad-af70-5d53aced9c93"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.348215 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a8b8762-d78d-44ad-af70-5d53aced9c93-config-data" (OuterVolumeSpecName: "config-data") pod "3a8b8762-d78d-44ad-af70-5d53aced9c93" (UID: "3a8b8762-d78d-44ad-af70-5d53aced9c93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.411293 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8b8762-d78d-44ad-af70-5d53aced9c93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.411325 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbwbq\" (UniqueName: \"kubernetes.io/projected/3a8b8762-d78d-44ad-af70-5d53aced9c93-kube-api-access-zbwbq\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.411335 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8b8762-d78d-44ad-af70-5d53aced9c93-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.710761 4676 generic.go:334] "Generic (PLEG): container finished" podID="3a8b8762-d78d-44ad-af70-5d53aced9c93" containerID="261a0919f777a0b18febac5db55c7874e12a7dc53a2edb25ac33fa7937a21ad0" exitCode=0 Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.710836 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a8b8762-d78d-44ad-af70-5d53aced9c93","Type":"ContainerDied","Data":"261a0919f777a0b18febac5db55c7874e12a7dc53a2edb25ac33fa7937a21ad0"} Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.710866 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"3a8b8762-d78d-44ad-af70-5d53aced9c93","Type":"ContainerDied","Data":"000b1ff3c0019bc3bb4c62b73efcef9044b9001595b681004d951af702c62578"} Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.710886 4676 scope.go:117] "RemoveContainer" containerID="261a0919f777a0b18febac5db55c7874e12a7dc53a2edb25ac33fa7937a21ad0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.710885 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.716463 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c825a1a-2a44-4f85-9271-0614aa8a07a4","Type":"ContainerDied","Data":"9944d0b2b4ee648d3da07eb6834b39006236c631ca51eb113778101cb554219b"} Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.716552 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.751477 4676 scope.go:117] "RemoveContainer" containerID="3d1351cf150d3d0db98e68d8cbd4da0fc2fc3e8bd6a7873c16725d321e9a9ab1" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.755073 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.766593 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.790437 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 24 00:25:03 crc kubenswrapper[4676]: E0124 00:25:03.790875 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8b8762-d78d-44ad-af70-5d53aced9c93" containerName="nova-api-api" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.790897 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8b8762-d78d-44ad-af70-5d53aced9c93" containerName="nova-api-api" Jan 24 00:25:03 crc 
kubenswrapper[4676]: E0124 00:25:03.790914 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8b8762-d78d-44ad-af70-5d53aced9c93" containerName="nova-api-log" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.790922 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8b8762-d78d-44ad-af70-5d53aced9c93" containerName="nova-api-log" Jan 24 00:25:03 crc kubenswrapper[4676]: E0124 00:25:03.790938 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="ceilometer-central-agent" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.790946 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="ceilometer-central-agent" Jan 24 00:25:03 crc kubenswrapper[4676]: E0124 00:25:03.790960 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="ceilometer-notification-agent" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.790969 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="ceilometer-notification-agent" Jan 24 00:25:03 crc kubenswrapper[4676]: E0124 00:25:03.790994 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="sg-core" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.791002 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="sg-core" Jan 24 00:25:03 crc kubenswrapper[4676]: E0124 00:25:03.791023 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="proxy-httpd" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.791032 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="proxy-httpd" Jan 24 00:25:03 
crc kubenswrapper[4676]: I0124 00:25:03.791235 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="ceilometer-notification-agent" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.791254 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="sg-core" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.791268 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8b8762-d78d-44ad-af70-5d53aced9c93" containerName="nova-api-log" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.791282 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8b8762-d78d-44ad-af70-5d53aced9c93" containerName="nova-api-api" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.791299 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="ceilometer-central-agent" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.791311 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" containerName="proxy-httpd" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.792463 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.793138 4676 scope.go:117] "RemoveContainer" containerID="261a0919f777a0b18febac5db55c7874e12a7dc53a2edb25ac33fa7937a21ad0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.796172 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.796340 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 24 00:25:03 crc kubenswrapper[4676]: E0124 00:25:03.796554 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"261a0919f777a0b18febac5db55c7874e12a7dc53a2edb25ac33fa7937a21ad0\": container with ID starting with 261a0919f777a0b18febac5db55c7874e12a7dc53a2edb25ac33fa7937a21ad0 not found: ID does not exist" containerID="261a0919f777a0b18febac5db55c7874e12a7dc53a2edb25ac33fa7937a21ad0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.796598 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"261a0919f777a0b18febac5db55c7874e12a7dc53a2edb25ac33fa7937a21ad0"} err="failed to get container status \"261a0919f777a0b18febac5db55c7874e12a7dc53a2edb25ac33fa7937a21ad0\": rpc error: code = NotFound desc = could not find container \"261a0919f777a0b18febac5db55c7874e12a7dc53a2edb25ac33fa7937a21ad0\": container with ID starting with 261a0919f777a0b18febac5db55c7874e12a7dc53a2edb25ac33fa7937a21ad0 not found: ID does not exist" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.796627 4676 scope.go:117] "RemoveContainer" containerID="3d1351cf150d3d0db98e68d8cbd4da0fc2fc3e8bd6a7873c16725d321e9a9ab1" Jan 24 00:25:03 crc kubenswrapper[4676]: E0124 00:25:03.797909 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3d1351cf150d3d0db98e68d8cbd4da0fc2fc3e8bd6a7873c16725d321e9a9ab1\": container with ID starting with 3d1351cf150d3d0db98e68d8cbd4da0fc2fc3e8bd6a7873c16725d321e9a9ab1 not found: ID does not exist" containerID="3d1351cf150d3d0db98e68d8cbd4da0fc2fc3e8bd6a7873c16725d321e9a9ab1" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.797944 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d1351cf150d3d0db98e68d8cbd4da0fc2fc3e8bd6a7873c16725d321e9a9ab1"} err="failed to get container status \"3d1351cf150d3d0db98e68d8cbd4da0fc2fc3e8bd6a7873c16725d321e9a9ab1\": rpc error: code = NotFound desc = could not find container \"3d1351cf150d3d0db98e68d8cbd4da0fc2fc3e8bd6a7873c16725d321e9a9ab1\": container with ID starting with 3d1351cf150d3d0db98e68d8cbd4da0fc2fc3e8bd6a7873c16725d321e9a9ab1 not found: ID does not exist" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.797971 4676 scope.go:117] "RemoveContainer" containerID="99145c476bd92edc34903b04d49544d24801d588d3c6e0a3159018baae1cfdbf" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.803913 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.805521 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.815009 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.821609 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-public-tls-certs\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.821653 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-logs\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.821744 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.821768 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-config-data\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.821820 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdbxs\" (UniqueName: \"kubernetes.io/projected/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-kube-api-access-cdbxs\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.821912 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.829251 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.831779 4676 scope.go:117] 
"RemoveContainer" containerID="e7c4de9128cd6e6e3de76a1907fc12bdd1ffdc61da41090dc5b038476ebe4f1a" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.860990 4676 scope.go:117] "RemoveContainer" containerID="33f9fd41388e48671ab7c03de55b7972b9dfd2592bc4af417c289b099fe20556" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.873974 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.876662 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.894134 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.894214 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.894349 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.913072 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.923632 4676 scope.go:117] "RemoveContainer" containerID="779dd89d442e8cfef9c213f780a757ec699bc0eef79a17d12ff2edf42b44250b" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.924635 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.924670 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-public-tls-certs\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.924695 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-logs\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.924752 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.924771 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-config-data\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.924805 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdbxs\" (UniqueName: \"kubernetes.io/projected/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-kube-api-access-cdbxs\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.928579 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-logs\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.933353 4676 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.943326 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdbxs\" (UniqueName: \"kubernetes.io/projected/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-kube-api-access-cdbxs\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.943947 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-config-data\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.952328 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-public-tls-certs\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:03 crc kubenswrapper[4676]: I0124 00:25:03.956009 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " pod="openstack/nova-api-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.039843 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58bbd77a-9518-4037-b96b-a1490082fb04-scripts\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " 
pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.039942 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58bbd77a-9518-4037-b96b-a1490082fb04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.039981 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58bbd77a-9518-4037-b96b-a1490082fb04-run-httpd\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.040122 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh7ks\" (UniqueName: \"kubernetes.io/projected/58bbd77a-9518-4037-b96b-a1490082fb04-kube-api-access-dh7ks\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.040329 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58bbd77a-9518-4037-b96b-a1490082fb04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.040367 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58bbd77a-9518-4037-b96b-a1490082fb04-log-httpd\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.040405 4676 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58bbd77a-9518-4037-b96b-a1490082fb04-config-data\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.040436 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/58bbd77a-9518-4037-b96b-a1490082fb04-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.127197 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.141943 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58bbd77a-9518-4037-b96b-a1490082fb04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.142153 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58bbd77a-9518-4037-b96b-a1490082fb04-log-httpd\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.142233 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58bbd77a-9518-4037-b96b-a1490082fb04-config-data\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.142330 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/58bbd77a-9518-4037-b96b-a1490082fb04-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.142448 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58bbd77a-9518-4037-b96b-a1490082fb04-scripts\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.142540 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58bbd77a-9518-4037-b96b-a1490082fb04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.142614 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58bbd77a-9518-4037-b96b-a1490082fb04-run-httpd\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.142697 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58bbd77a-9518-4037-b96b-a1490082fb04-log-httpd\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.142787 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh7ks\" (UniqueName: \"kubernetes.io/projected/58bbd77a-9518-4037-b96b-a1490082fb04-kube-api-access-dh7ks\") pod \"ceilometer-0\" (UID: 
\"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.143780 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58bbd77a-9518-4037-b96b-a1490082fb04-run-httpd\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.146291 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58bbd77a-9518-4037-b96b-a1490082fb04-config-data\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.146453 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58bbd77a-9518-4037-b96b-a1490082fb04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.147930 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58bbd77a-9518-4037-b96b-a1490082fb04-scripts\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.149892 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/58bbd77a-9518-4037-b96b-a1490082fb04-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.152930 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/58bbd77a-9518-4037-b96b-a1490082fb04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.174304 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh7ks\" (UniqueName: \"kubernetes.io/projected/58bbd77a-9518-4037-b96b-a1490082fb04-kube-api-access-dh7ks\") pod \"ceilometer-0\" (UID: \"58bbd77a-9518-4037-b96b-a1490082fb04\") " pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.198968 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.309760 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a8b8762-d78d-44ad-af70-5d53aced9c93" path="/var/lib/kubelet/pods/3a8b8762-d78d-44ad-af70-5d53aced9c93/volumes" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.314686 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c825a1a-2a44-4f85-9271-0614aa8a07a4" path="/var/lib/kubelet/pods/3c825a1a-2a44-4f85-9271-0614aa8a07a4/volumes" Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.631890 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.727755 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace","Type":"ContainerStarted","Data":"21b2d30f9d7e2c4a549d8c87dd63e0002ace80a317cb0d0205c94126fe98f6c1"} Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.808657 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 00:25:04 crc kubenswrapper[4676]: I0124 00:25:04.985115 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:25:05 
crc kubenswrapper[4676]: I0124 00:25:05.007906 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:25:05 crc kubenswrapper[4676]: I0124 00:25:05.027243 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 24 00:25:05 crc kubenswrapper[4676]: I0124 00:25:05.027446 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 24 00:25:05 crc kubenswrapper[4676]: I0124 00:25:05.736804 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace","Type":"ContainerStarted","Data":"01fc81c9856cfd2ef13df58946b52afdb5cba674fdb4423a36d9a11f6c166109"} Jan 24 00:25:05 crc kubenswrapper[4676]: I0124 00:25:05.737048 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace","Type":"ContainerStarted","Data":"71c7f3216e344862d8255e5daeef618a41a9f2fd2e7332ca45fe189abdb2cd5c"} Jan 24 00:25:05 crc kubenswrapper[4676]: I0124 00:25:05.740426 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58bbd77a-9518-4037-b96b-a1490082fb04","Type":"ContainerStarted","Data":"8269b292deaebc44d46a82bcbd5a9052216ab55fce7ada8f29a4aafd48fa5ccb"} Jan 24 00:25:05 crc kubenswrapper[4676]: I0124 00:25:05.740452 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58bbd77a-9518-4037-b96b-a1490082fb04","Type":"ContainerStarted","Data":"1450a27e6740c0748d4cf938867267b47c8cf60f863dd86e12df098c85f250cc"} Jan 24 00:25:05 crc kubenswrapper[4676]: I0124 00:25:05.760732 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.760717717 podStartE2EDuration="2.760717717s" podCreationTimestamp="2026-01-24 00:25:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:25:05.757239711 +0000 UTC m=+1289.787210702" watchObservedRunningTime="2026-01-24 00:25:05.760717717 +0000 UTC m=+1289.790688718" Jan 24 00:25:05 crc kubenswrapper[4676]: I0124 00:25:05.774636 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 24 00:25:05 crc kubenswrapper[4676]: I0124 00:25:05.928757 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-49926"] Jan 24 00:25:05 crc kubenswrapper[4676]: I0124 00:25:05.929919 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-49926" Jan 24 00:25:05 crc kubenswrapper[4676]: I0124 00:25:05.937985 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-49926"] Jan 24 00:25:05 crc kubenswrapper[4676]: I0124 00:25:05.940076 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 24 00:25:05 crc kubenswrapper[4676]: I0124 00:25:05.940471 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.041627 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d6784012-a1a0-4647-929f-50788a8dc5bb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.041675 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d6784012-a1a0-4647-929f-50788a8dc5bb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.078520 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-49926\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") " pod="openstack/nova-cell1-cell-mapping-49926" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.078598 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-scripts\") pod \"nova-cell1-cell-mapping-49926\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") " pod="openstack/nova-cell1-cell-mapping-49926" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.078786 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-config-data\") pod \"nova-cell1-cell-mapping-49926\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") " pod="openstack/nova-cell1-cell-mapping-49926" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.078951 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5hxl\" (UniqueName: \"kubernetes.io/projected/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-kube-api-access-q5hxl\") pod \"nova-cell1-cell-mapping-49926\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") " pod="openstack/nova-cell1-cell-mapping-49926" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.180524 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5hxl\" (UniqueName: \"kubernetes.io/projected/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-kube-api-access-q5hxl\") pod 
\"nova-cell1-cell-mapping-49926\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") " pod="openstack/nova-cell1-cell-mapping-49926" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.180678 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-49926\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") " pod="openstack/nova-cell1-cell-mapping-49926" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.180699 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-scripts\") pod \"nova-cell1-cell-mapping-49926\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") " pod="openstack/nova-cell1-cell-mapping-49926" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.180731 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-config-data\") pod \"nova-cell1-cell-mapping-49926\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") " pod="openstack/nova-cell1-cell-mapping-49926" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.186113 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-config-data\") pod \"nova-cell1-cell-mapping-49926\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") " pod="openstack/nova-cell1-cell-mapping-49926" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.186690 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-49926\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") " 
pod="openstack/nova-cell1-cell-mapping-49926" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.190990 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-scripts\") pod \"nova-cell1-cell-mapping-49926\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") " pod="openstack/nova-cell1-cell-mapping-49926" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.200277 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5hxl\" (UniqueName: \"kubernetes.io/projected/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-kube-api-access-q5hxl\") pod \"nova-cell1-cell-mapping-49926\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") " pod="openstack/nova-cell1-cell-mapping-49926" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.253623 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-49926" Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.744367 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-49926"] Jan 24 00:25:06 crc kubenswrapper[4676]: W0124 00:25:06.756906 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e6d63aa_e36a_4ef7_b50d_255b44e72c20.slice/crio-11b0104944bb8c5aff79330efedf762b41dcc22f00267801cc45a32cf1468cf9 WatchSource:0}: Error finding container 11b0104944bb8c5aff79330efedf762b41dcc22f00267801cc45a32cf1468cf9: Status 404 returned error can't find the container with id 11b0104944bb8c5aff79330efedf762b41dcc22f00267801cc45a32cf1468cf9 Jan 24 00:25:06 crc kubenswrapper[4676]: I0124 00:25:06.758421 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58bbd77a-9518-4037-b96b-a1490082fb04","Type":"ContainerStarted","Data":"5a9e6225549dfb21345b08b2519c115a0add95915d84a2fb7b60cccef323a6c6"} 
Jan 24 00:25:07 crc kubenswrapper[4676]: I0124 00:25:07.498894 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74"
Jan 24 00:25:07 crc kubenswrapper[4676]: I0124 00:25:07.571552 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-72qp2"]
Jan 24 00:25:07 crc kubenswrapper[4676]: I0124 00:25:07.572036 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-72qp2" podUID="b8c07196-aa1d-4d14-bc9f-6aec4de13853" containerName="dnsmasq-dns" containerID="cri-o://2dc83da30d1c8b3f8b6fad2b698ea560b01dbdda3cf6667c49b816b31947008d" gracePeriod=10
Jan 24 00:25:07 crc kubenswrapper[4676]: I0124 00:25:07.782902 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-49926" event={"ID":"2e6d63aa-e36a-4ef7-b50d-255b44e72c20","Type":"ContainerStarted","Data":"d43689494700a5ceee020f09f4ca2f2fd150c60e2e3c7e0d6cd30145e29f275e"}
Jan 24 00:25:07 crc kubenswrapper[4676]: I0124 00:25:07.782942 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-49926" event={"ID":"2e6d63aa-e36a-4ef7-b50d-255b44e72c20","Type":"ContainerStarted","Data":"11b0104944bb8c5aff79330efedf762b41dcc22f00267801cc45a32cf1468cf9"}
Jan 24 00:25:07 crc kubenswrapper[4676]: I0124 00:25:07.793836 4676 generic.go:334] "Generic (PLEG): container finished" podID="b8c07196-aa1d-4d14-bc9f-6aec4de13853" containerID="2dc83da30d1c8b3f8b6fad2b698ea560b01dbdda3cf6667c49b816b31947008d" exitCode=0
Jan 24 00:25:07 crc kubenswrapper[4676]: I0124 00:25:07.793904 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-72qp2" event={"ID":"b8c07196-aa1d-4d14-bc9f-6aec4de13853","Type":"ContainerDied","Data":"2dc83da30d1c8b3f8b6fad2b698ea560b01dbdda3cf6667c49b816b31947008d"}
Jan 24 00:25:07 crc kubenswrapper[4676]: I0124 00:25:07.814709 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-49926" podStartSLOduration=2.814689528 podStartE2EDuration="2.814689528s" podCreationTimestamp="2026-01-24 00:25:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:25:07.811761858 +0000 UTC m=+1291.841732859" watchObservedRunningTime="2026-01-24 00:25:07.814689528 +0000 UTC m=+1291.844660529"
Jan 24 00:25:07 crc kubenswrapper[4676]: I0124 00:25:07.816044 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58bbd77a-9518-4037-b96b-a1490082fb04","Type":"ContainerStarted","Data":"5a397a6753309135105283265cae4b39c50a6d461b49380c1661be0d1ddb8dee"}
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.161787 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-72qp2"
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.330115 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-dns-svc\") pod \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") "
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.330177 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-config\") pod \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") "
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.330316 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2kxc\" (UniqueName: \"kubernetes.io/projected/b8c07196-aa1d-4d14-bc9f-6aec4de13853-kube-api-access-c2kxc\") pod \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") "
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.330333 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-ovsdbserver-sb\") pod \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") "
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.330356 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-dns-swift-storage-0\") pod \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") "
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.330418 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-ovsdbserver-nb\") pod \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\" (UID: \"b8c07196-aa1d-4d14-bc9f-6aec4de13853\") "
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.355883 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8c07196-aa1d-4d14-bc9f-6aec4de13853-kube-api-access-c2kxc" (OuterVolumeSpecName: "kube-api-access-c2kxc") pod "b8c07196-aa1d-4d14-bc9f-6aec4de13853" (UID: "b8c07196-aa1d-4d14-bc9f-6aec4de13853"). InnerVolumeSpecName "kube-api-access-c2kxc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.409021 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b8c07196-aa1d-4d14-bc9f-6aec4de13853" (UID: "b8c07196-aa1d-4d14-bc9f-6aec4de13853"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.410302 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b8c07196-aa1d-4d14-bc9f-6aec4de13853" (UID: "b8c07196-aa1d-4d14-bc9f-6aec4de13853"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.410798 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b8c07196-aa1d-4d14-bc9f-6aec4de13853" (UID: "b8c07196-aa1d-4d14-bc9f-6aec4de13853"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.413345 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-config" (OuterVolumeSpecName: "config") pod "b8c07196-aa1d-4d14-bc9f-6aec4de13853" (UID: "b8c07196-aa1d-4d14-bc9f-6aec4de13853"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.433424 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b8c07196-aa1d-4d14-bc9f-6aec4de13853" (UID: "b8c07196-aa1d-4d14-bc9f-6aec4de13853"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.434207 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2kxc\" (UniqueName: \"kubernetes.io/projected/b8c07196-aa1d-4d14-bc9f-6aec4de13853-kube-api-access-c2kxc\") on node \"crc\" DevicePath \"\""
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.434240 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.434249 4676 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.434257 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.434266 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.434274 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8c07196-aa1d-4d14-bc9f-6aec4de13853-config\") on node \"crc\" DevicePath \"\""
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.827818 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-72qp2" event={"ID":"b8c07196-aa1d-4d14-bc9f-6aec4de13853","Type":"ContainerDied","Data":"149be033f93bb329597c455a0e6c6a8a209721f648e144a6195c8b2dcfc6602d"}
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.827842 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-72qp2"
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.827880 4676 scope.go:117] "RemoveContainer" containerID="2dc83da30d1c8b3f8b6fad2b698ea560b01dbdda3cf6667c49b816b31947008d"
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.857426 4676 scope.go:117] "RemoveContainer" containerID="ab4cecb7436eb3bdc8075e2778a585be7c7bc22f8b1112d18ec061da1496853f"
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.892596 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-72qp2"]
Jan 24 00:25:08 crc kubenswrapper[4676]: I0124 00:25:08.923299 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-72qp2"]
Jan 24 00:25:09 crc kubenswrapper[4676]: I0124 00:25:09.839921 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58bbd77a-9518-4037-b96b-a1490082fb04","Type":"ContainerStarted","Data":"795b1c1ddd3e5518853d874ea4d59834c66b9a6cf52194dfcd4ecebaabca0ce1"}
Jan 24 00:25:09 crc kubenswrapper[4676]: I0124 00:25:09.840092 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 24 00:25:09 crc kubenswrapper[4676]: I0124 00:25:09.861321 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.118502461 podStartE2EDuration="6.861306444s" podCreationTimestamp="2026-01-24 00:25:03 +0000 UTC" firstStartedPulling="2026-01-24 00:25:04.822349627 +0000 UTC m=+1288.852320628" lastFinishedPulling="2026-01-24 00:25:08.56515361 +0000 UTC m=+1292.595124611" observedRunningTime="2026-01-24 00:25:09.859831539 +0000 UTC m=+1293.889802560" watchObservedRunningTime="2026-01-24 00:25:09.861306444 +0000 UTC m=+1293.891277445"
Jan 24 00:25:10 crc kubenswrapper[4676]: I0124 00:25:10.264660 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8c07196-aa1d-4d14-bc9f-6aec4de13853" path="/var/lib/kubelet/pods/b8c07196-aa1d-4d14-bc9f-6aec4de13853/volumes"
Jan 24 00:25:12 crc kubenswrapper[4676]: I0124 00:25:12.889370 4676 generic.go:334] "Generic (PLEG): container finished" podID="2e6d63aa-e36a-4ef7-b50d-255b44e72c20" containerID="d43689494700a5ceee020f09f4ca2f2fd150c60e2e3c7e0d6cd30145e29f275e" exitCode=0
Jan 24 00:25:12 crc kubenswrapper[4676]: I0124 00:25:12.889470 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-49926" event={"ID":"2e6d63aa-e36a-4ef7-b50d-255b44e72c20","Type":"ContainerDied","Data":"d43689494700a5ceee020f09f4ca2f2fd150c60e2e3c7e0d6cd30145e29f275e"}
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.127953 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.128304 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.332446 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-49926"
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.448600 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5hxl\" (UniqueName: \"kubernetes.io/projected/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-kube-api-access-q5hxl\") pod \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") "
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.449157 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-config-data\") pod \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") "
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.449270 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-combined-ca-bundle\") pod \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") "
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.449420 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-scripts\") pod \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\" (UID: \"2e6d63aa-e36a-4ef7-b50d-255b44e72c20\") "
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.468494 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-scripts" (OuterVolumeSpecName: "scripts") pod "2e6d63aa-e36a-4ef7-b50d-255b44e72c20" (UID: "2e6d63aa-e36a-4ef7-b50d-255b44e72c20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.470587 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-kube-api-access-q5hxl" (OuterVolumeSpecName: "kube-api-access-q5hxl") pod "2e6d63aa-e36a-4ef7-b50d-255b44e72c20" (UID: "2e6d63aa-e36a-4ef7-b50d-255b44e72c20"). InnerVolumeSpecName "kube-api-access-q5hxl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.479621 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e6d63aa-e36a-4ef7-b50d-255b44e72c20" (UID: "2e6d63aa-e36a-4ef7-b50d-255b44e72c20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.482552 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-config-data" (OuterVolumeSpecName: "config-data") pod "2e6d63aa-e36a-4ef7-b50d-255b44e72c20" (UID: "2e6d63aa-e36a-4ef7-b50d-255b44e72c20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.551252 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.551280 4676 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.551290 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5hxl\" (UniqueName: \"kubernetes.io/projected/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-kube-api-access-q5hxl\") on node \"crc\" DevicePath \"\""
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.551300 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6d63aa-e36a-4ef7-b50d-255b44e72c20-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.916962 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-49926" event={"ID":"2e6d63aa-e36a-4ef7-b50d-255b44e72c20","Type":"ContainerDied","Data":"11b0104944bb8c5aff79330efedf762b41dcc22f00267801cc45a32cf1468cf9"}
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.917006 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11b0104944bb8c5aff79330efedf762b41dcc22f00267801cc45a32cf1468cf9"
Jan 24 00:25:14 crc kubenswrapper[4676]: I0124 00:25:14.917069 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-49926" Jan 24 00:25:15 crc kubenswrapper[4676]: I0124 00:25:15.034012 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 00:25:15 crc kubenswrapper[4676]: I0124 00:25:15.035499 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 00:25:15 crc kubenswrapper[4676]: I0124 00:25:15.046631 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 24 00:25:15 crc kubenswrapper[4676]: I0124 00:25:15.109776 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:25:15 crc kubenswrapper[4676]: I0124 00:25:15.112219 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" containerName="nova-api-api" containerID="cri-o://01fc81c9856cfd2ef13df58946b52afdb5cba674fdb4423a36d9a11f6c166109" gracePeriod=30 Jan 24 00:25:15 crc kubenswrapper[4676]: I0124 00:25:15.112238 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" containerName="nova-api-log" containerID="cri-o://71c7f3216e344862d8255e5daeef618a41a9f2fd2e7332ca45fe189abdb2cd5c" gracePeriod=30 Jan 24 00:25:15 crc kubenswrapper[4676]: I0124 00:25:15.132428 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.197:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 00:25:15 crc kubenswrapper[4676]: I0124 00:25:15.132443 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" containerName="nova-api-api" 
probeResult="failure" output="Get \"https://10.217.0.197:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 00:25:15 crc kubenswrapper[4676]: I0124 00:25:15.148766 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 00:25:15 crc kubenswrapper[4676]: I0124 00:25:15.149090 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="32d678d8-bc5d-4993-afc3-c6700d53b00e" containerName="nova-scheduler-scheduler" containerID="cri-o://9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142" gracePeriod=30 Jan 24 00:25:15 crc kubenswrapper[4676]: I0124 00:25:15.168127 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 00:25:15 crc kubenswrapper[4676]: E0124 00:25:15.649791 4676 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 24 00:25:15 crc kubenswrapper[4676]: E0124 00:25:15.652273 4676 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 24 00:25:15 crc kubenswrapper[4676]: E0124 00:25:15.653526 4676 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 24 00:25:15 
crc kubenswrapper[4676]: E0124 00:25:15.653598 4676 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="32d678d8-bc5d-4993-afc3-c6700d53b00e" containerName="nova-scheduler-scheduler" Jan 24 00:25:15 crc kubenswrapper[4676]: I0124 00:25:15.928028 4676 generic.go:334] "Generic (PLEG): container finished" podID="8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" containerID="71c7f3216e344862d8255e5daeef618a41a9f2fd2e7332ca45fe189abdb2cd5c" exitCode=143 Jan 24 00:25:15 crc kubenswrapper[4676]: I0124 00:25:15.928461 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace","Type":"ContainerDied","Data":"71c7f3216e344862d8255e5daeef618a41a9f2fd2e7332ca45fe189abdb2cd5c"} Jan 24 00:25:15 crc kubenswrapper[4676]: I0124 00:25:15.940720 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 24 00:25:16 crc kubenswrapper[4676]: I0124 00:25:16.936682 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d6784012-a1a0-4647-929f-50788a8dc5bb" containerName="nova-metadata-log" containerID="cri-o://dc7b56efe46aa7c7308ac62b0923d3c3d5f8466d0d2cfaa919b9e8096c2effb1" gracePeriod=30 Jan 24 00:25:16 crc kubenswrapper[4676]: I0124 00:25:16.937114 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d6784012-a1a0-4647-929f-50788a8dc5bb" containerName="nova-metadata-metadata" containerID="cri-o://069d417103a5b3d1a63ca0a83a7bce5e55fb783cc6b5b9635b78d676f26ab99b" gracePeriod=30 Jan 24 00:25:17 crc kubenswrapper[4676]: I0124 00:25:17.950251 4676 generic.go:334] "Generic (PLEG): container finished" podID="d6784012-a1a0-4647-929f-50788a8dc5bb" 
containerID="dc7b56efe46aa7c7308ac62b0923d3c3d5f8466d0d2cfaa919b9e8096c2effb1" exitCode=143 Jan 24 00:25:17 crc kubenswrapper[4676]: I0124 00:25:17.950313 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d6784012-a1a0-4647-929f-50788a8dc5bb","Type":"ContainerDied","Data":"dc7b56efe46aa7c7308ac62b0923d3c3d5f8466d0d2cfaa919b9e8096c2effb1"} Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.092671 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d6784012-a1a0-4647-929f-50788a8dc5bb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:41200->10.217.0.195:8775: read: connection reset by peer" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.092746 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d6784012-a1a0-4647-929f-50788a8dc5bb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:41204->10.217.0.195:8775: read: connection reset by peer" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.554734 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 00:25:20 crc kubenswrapper[4676]: E0124 00:25:20.650002 4676 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142 is running failed: container process not found" containerID="9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 24 00:25:20 crc kubenswrapper[4676]: E0124 00:25:20.651231 4676 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142 is running failed: container process not found" containerID="9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 24 00:25:20 crc kubenswrapper[4676]: E0124 00:25:20.651474 4676 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142 is running failed: container process not found" containerID="9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 24 00:25:20 crc kubenswrapper[4676]: E0124 00:25:20.651498 4676 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="32d678d8-bc5d-4993-afc3-c6700d53b00e" containerName="nova-scheduler-scheduler" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.660775 4676 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.716211 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-combined-ca-bundle\") pod \"d6784012-a1a0-4647-929f-50788a8dc5bb\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.716338 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-config-data\") pod \"d6784012-a1a0-4647-929f-50788a8dc5bb\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.716365 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-nova-metadata-tls-certs\") pod \"d6784012-a1a0-4647-929f-50788a8dc5bb\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.717802 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4dfw\" (UniqueName: \"kubernetes.io/projected/d6784012-a1a0-4647-929f-50788a8dc5bb-kube-api-access-p4dfw\") pod \"d6784012-a1a0-4647-929f-50788a8dc5bb\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.717836 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6784012-a1a0-4647-929f-50788a8dc5bb-logs\") pod \"d6784012-a1a0-4647-929f-50788a8dc5bb\" (UID: \"d6784012-a1a0-4647-929f-50788a8dc5bb\") " Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.727870 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d6784012-a1a0-4647-929f-50788a8dc5bb-logs" (OuterVolumeSpecName: "logs") pod "d6784012-a1a0-4647-929f-50788a8dc5bb" (UID: "d6784012-a1a0-4647-929f-50788a8dc5bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.753183 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-config-data" (OuterVolumeSpecName: "config-data") pod "d6784012-a1a0-4647-929f-50788a8dc5bb" (UID: "d6784012-a1a0-4647-929f-50788a8dc5bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.761223 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6784012-a1a0-4647-929f-50788a8dc5bb-kube-api-access-p4dfw" (OuterVolumeSpecName: "kube-api-access-p4dfw") pod "d6784012-a1a0-4647-929f-50788a8dc5bb" (UID: "d6784012-a1a0-4647-929f-50788a8dc5bb"). InnerVolumeSpecName "kube-api-access-p4dfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.804488 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6784012-a1a0-4647-929f-50788a8dc5bb" (UID: "d6784012-a1a0-4647-929f-50788a8dc5bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.806530 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d6784012-a1a0-4647-929f-50788a8dc5bb" (UID: "d6784012-a1a0-4647-929f-50788a8dc5bb"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.822516 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vr9g\" (UniqueName: \"kubernetes.io/projected/32d678d8-bc5d-4993-afc3-c6700d53b00e-kube-api-access-5vr9g\") pod \"32d678d8-bc5d-4993-afc3-c6700d53b00e\" (UID: \"32d678d8-bc5d-4993-afc3-c6700d53b00e\") " Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.822576 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32d678d8-bc5d-4993-afc3-c6700d53b00e-combined-ca-bundle\") pod \"32d678d8-bc5d-4993-afc3-c6700d53b00e\" (UID: \"32d678d8-bc5d-4993-afc3-c6700d53b00e\") " Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.822670 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32d678d8-bc5d-4993-afc3-c6700d53b00e-config-data\") pod \"32d678d8-bc5d-4993-afc3-c6700d53b00e\" (UID: \"32d678d8-bc5d-4993-afc3-c6700d53b00e\") " Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.823018 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.823030 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.823042 4676 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6784012-a1a0-4647-929f-50788a8dc5bb-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 
00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.823052 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4dfw\" (UniqueName: \"kubernetes.io/projected/d6784012-a1a0-4647-929f-50788a8dc5bb-kube-api-access-p4dfw\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.823080 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6784012-a1a0-4647-929f-50788a8dc5bb-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.842154 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d678d8-bc5d-4993-afc3-c6700d53b00e-kube-api-access-5vr9g" (OuterVolumeSpecName: "kube-api-access-5vr9g") pod "32d678d8-bc5d-4993-afc3-c6700d53b00e" (UID: "32d678d8-bc5d-4993-afc3-c6700d53b00e"). InnerVolumeSpecName "kube-api-access-5vr9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.862733 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32d678d8-bc5d-4993-afc3-c6700d53b00e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32d678d8-bc5d-4993-afc3-c6700d53b00e" (UID: "32d678d8-bc5d-4993-afc3-c6700d53b00e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.862863 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32d678d8-bc5d-4993-afc3-c6700d53b00e-config-data" (OuterVolumeSpecName: "config-data") pod "32d678d8-bc5d-4993-afc3-c6700d53b00e" (UID: "32d678d8-bc5d-4993-afc3-c6700d53b00e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.924774 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vr9g\" (UniqueName: \"kubernetes.io/projected/32d678d8-bc5d-4993-afc3-c6700d53b00e-kube-api-access-5vr9g\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.924801 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32d678d8-bc5d-4993-afc3-c6700d53b00e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.924810 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32d678d8-bc5d-4993-afc3-c6700d53b00e-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.992221 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d6784012-a1a0-4647-929f-50788a8dc5bb","Type":"ContainerDied","Data":"069d417103a5b3d1a63ca0a83a7bce5e55fb783cc6b5b9635b78d676f26ab99b"} Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.992251 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.992264 4676 scope.go:117] "RemoveContainer" containerID="069d417103a5b3d1a63ca0a83a7bce5e55fb783cc6b5b9635b78d676f26ab99b" Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.992169 4676 generic.go:334] "Generic (PLEG): container finished" podID="d6784012-a1a0-4647-929f-50788a8dc5bb" containerID="069d417103a5b3d1a63ca0a83a7bce5e55fb783cc6b5b9635b78d676f26ab99b" exitCode=0 Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.992715 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d6784012-a1a0-4647-929f-50788a8dc5bb","Type":"ContainerDied","Data":"01fe7e14f549bff2abe35a4d5403466f7bbd36dfe57daa6ef4e421db5dcd7cf7"} Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.997367 4676 generic.go:334] "Generic (PLEG): container finished" podID="32d678d8-bc5d-4993-afc3-c6700d53b00e" containerID="9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142" exitCode=0 Jan 24 00:25:20 crc kubenswrapper[4676]: I0124 00:25:20.997471 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.001715 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"32d678d8-bc5d-4993-afc3-c6700d53b00e","Type":"ContainerDied","Data":"9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142"} Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.001768 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"32d678d8-bc5d-4993-afc3-c6700d53b00e","Type":"ContainerDied","Data":"d01bc4c049fe259a880c270ae5608bf35ba2c89a0a3367424a3555670349c800"} Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.006038 4676 generic.go:334] "Generic (PLEG): container finished" podID="8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" containerID="01fc81c9856cfd2ef13df58946b52afdb5cba674fdb4423a36d9a11f6c166109" exitCode=0 Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.006099 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace","Type":"ContainerDied","Data":"01fc81c9856cfd2ef13df58946b52afdb5cba674fdb4423a36d9a11f6c166109"} Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.044831 4676 scope.go:117] "RemoveContainer" containerID="dc7b56efe46aa7c7308ac62b0923d3c3d5f8466d0d2cfaa919b9e8096c2effb1" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.054394 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.068869 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.086669 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.091075 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 
00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.100044 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 24 00:25:21 crc kubenswrapper[4676]: E0124 00:25:21.100370 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6d63aa-e36a-4ef7-b50d-255b44e72c20" containerName="nova-manage" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.100396 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6d63aa-e36a-4ef7-b50d-255b44e72c20" containerName="nova-manage" Jan 24 00:25:21 crc kubenswrapper[4676]: E0124 00:25:21.100410 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d678d8-bc5d-4993-afc3-c6700d53b00e" containerName="nova-scheduler-scheduler" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.100416 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d678d8-bc5d-4993-afc3-c6700d53b00e" containerName="nova-scheduler-scheduler" Jan 24 00:25:21 crc kubenswrapper[4676]: E0124 00:25:21.100441 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c07196-aa1d-4d14-bc9f-6aec4de13853" containerName="dnsmasq-dns" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.100448 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c07196-aa1d-4d14-bc9f-6aec4de13853" containerName="dnsmasq-dns" Jan 24 00:25:21 crc kubenswrapper[4676]: E0124 00:25:21.100458 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6784012-a1a0-4647-929f-50788a8dc5bb" containerName="nova-metadata-log" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.100463 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6784012-a1a0-4647-929f-50788a8dc5bb" containerName="nova-metadata-log" Jan 24 00:25:21 crc kubenswrapper[4676]: E0124 00:25:21.100478 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6784012-a1a0-4647-929f-50788a8dc5bb" containerName="nova-metadata-metadata" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 
00:25:21.100483 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6784012-a1a0-4647-929f-50788a8dc5bb" containerName="nova-metadata-metadata" Jan 24 00:25:21 crc kubenswrapper[4676]: E0124 00:25:21.100491 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c07196-aa1d-4d14-bc9f-6aec4de13853" containerName="init" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.100497 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c07196-aa1d-4d14-bc9f-6aec4de13853" containerName="init" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.100663 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c07196-aa1d-4d14-bc9f-6aec4de13853" containerName="dnsmasq-dns" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.100673 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d678d8-bc5d-4993-afc3-c6700d53b00e" containerName="nova-scheduler-scheduler" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.100683 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6784012-a1a0-4647-929f-50788a8dc5bb" containerName="nova-metadata-log" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.100692 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6d63aa-e36a-4ef7-b50d-255b44e72c20" containerName="nova-manage" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.100701 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6784012-a1a0-4647-929f-50788a8dc5bb" containerName="nova-metadata-metadata" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.101624 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.102319 4676 scope.go:117] "RemoveContainer" containerID="069d417103a5b3d1a63ca0a83a7bce5e55fb783cc6b5b9635b78d676f26ab99b" Jan 24 00:25:21 crc kubenswrapper[4676]: E0124 00:25:21.103095 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"069d417103a5b3d1a63ca0a83a7bce5e55fb783cc6b5b9635b78d676f26ab99b\": container with ID starting with 069d417103a5b3d1a63ca0a83a7bce5e55fb783cc6b5b9635b78d676f26ab99b not found: ID does not exist" containerID="069d417103a5b3d1a63ca0a83a7bce5e55fb783cc6b5b9635b78d676f26ab99b" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.103196 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"069d417103a5b3d1a63ca0a83a7bce5e55fb783cc6b5b9635b78d676f26ab99b"} err="failed to get container status \"069d417103a5b3d1a63ca0a83a7bce5e55fb783cc6b5b9635b78d676f26ab99b\": rpc error: code = NotFound desc = could not find container \"069d417103a5b3d1a63ca0a83a7bce5e55fb783cc6b5b9635b78d676f26ab99b\": container with ID starting with 069d417103a5b3d1a63ca0a83a7bce5e55fb783cc6b5b9635b78d676f26ab99b not found: ID does not exist" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.103272 4676 scope.go:117] "RemoveContainer" containerID="dc7b56efe46aa7c7308ac62b0923d3c3d5f8466d0d2cfaa919b9e8096c2effb1" Jan 24 00:25:21 crc kubenswrapper[4676]: E0124 00:25:21.104273 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7b56efe46aa7c7308ac62b0923d3c3d5f8466d0d2cfaa919b9e8096c2effb1\": container with ID starting with dc7b56efe46aa7c7308ac62b0923d3c3d5f8466d0d2cfaa919b9e8096c2effb1 not found: ID does not exist" containerID="dc7b56efe46aa7c7308ac62b0923d3c3d5f8466d0d2cfaa919b9e8096c2effb1" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 
00:25:21.104314 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7b56efe46aa7c7308ac62b0923d3c3d5f8466d0d2cfaa919b9e8096c2effb1"} err="failed to get container status \"dc7b56efe46aa7c7308ac62b0923d3c3d5f8466d0d2cfaa919b9e8096c2effb1\": rpc error: code = NotFound desc = could not find container \"dc7b56efe46aa7c7308ac62b0923d3c3d5f8466d0d2cfaa919b9e8096c2effb1\": container with ID starting with dc7b56efe46aa7c7308ac62b0923d3c3d5f8466d0d2cfaa919b9e8096c2effb1 not found: ID does not exist" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.104346 4676 scope.go:117] "RemoveContainer" containerID="9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.105215 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.107040 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.110014 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.117707 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 00:25:21 crc kubenswrapper[4676]: E0124 00:25:21.118326 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" containerName="nova-api-log" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.118429 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" containerName="nova-api-log" Jan 24 00:25:21 crc kubenswrapper[4676]: E0124 00:25:21.118532 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" containerName="nova-api-api" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.118596 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" containerName="nova-api-api" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.118828 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" containerName="nova-api-api" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.118913 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" containerName="nova-api-log" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.120258 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.125975 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.137071 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.163767 4676 scope.go:117] "RemoveContainer" containerID="9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142" Jan 24 00:25:21 crc kubenswrapper[4676]: E0124 00:25:21.164232 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142\": container with ID starting with 9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142 not found: ID does not exist" containerID="9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.164275 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142"} err="failed to get container status \"9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142\": rpc error: code = NotFound desc = could not find container \"9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142\": container with ID starting with 9ba31632fec3a842045888270699fb415b184651ae3d470415114c55c80c8142 not found: ID does not exist" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.185204 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.229251 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-internal-tls-certs\") pod \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.229426 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-config-data\") pod \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.229495 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-logs\") pod \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.229539 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdbxs\" (UniqueName: \"kubernetes.io/projected/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-kube-api-access-cdbxs\") pod \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.229616 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-public-tls-certs\") pod \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.229738 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-combined-ca-bundle\") pod \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\" (UID: \"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace\") " Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 
00:25:21.230237 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1085d48-78f1-4437-a518-239ba90b7c0b-logs\") pod \"nova-metadata-0\" (UID: \"f1085d48-78f1-4437-a518-239ba90b7c0b\") " pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.230303 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1085d48-78f1-4437-a518-239ba90b7c0b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f1085d48-78f1-4437-a518-239ba90b7c0b\") " pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.230414 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-logs" (OuterVolumeSpecName: "logs") pod "8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" (UID: "8df0611d-44c4-4791-b1ee-ebfd5e9c4ace"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.230620 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcmvm\" (UniqueName: \"kubernetes.io/projected/8ff6f87b-a571-43da-9fbc-9203f0001771-kube-api-access-wcmvm\") pod \"nova-scheduler-0\" (UID: \"8ff6f87b-a571-43da-9fbc-9203f0001771\") " pod="openstack/nova-scheduler-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.230710 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff6f87b-a571-43da-9fbc-9203f0001771-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8ff6f87b-a571-43da-9fbc-9203f0001771\") " pod="openstack/nova-scheduler-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.230780 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1085d48-78f1-4437-a518-239ba90b7c0b-config-data\") pod \"nova-metadata-0\" (UID: \"f1085d48-78f1-4437-a518-239ba90b7c0b\") " pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.230819 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1085d48-78f1-4437-a518-239ba90b7c0b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f1085d48-78f1-4437-a518-239ba90b7c0b\") " pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.230889 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mslmt\" (UniqueName: \"kubernetes.io/projected/f1085d48-78f1-4437-a518-239ba90b7c0b-kube-api-access-mslmt\") pod \"nova-metadata-0\" (UID: \"f1085d48-78f1-4437-a518-239ba90b7c0b\") " pod="openstack/nova-metadata-0" 
Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.231152 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff6f87b-a571-43da-9fbc-9203f0001771-config-data\") pod \"nova-scheduler-0\" (UID: \"8ff6f87b-a571-43da-9fbc-9203f0001771\") " pod="openstack/nova-scheduler-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.231800 4676 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-logs\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.236784 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-kube-api-access-cdbxs" (OuterVolumeSpecName: "kube-api-access-cdbxs") pod "8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" (UID: "8df0611d-44c4-4791-b1ee-ebfd5e9c4ace"). InnerVolumeSpecName "kube-api-access-cdbxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.264968 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" (UID: "8df0611d-44c4-4791-b1ee-ebfd5e9c4ace"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.280547 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-config-data" (OuterVolumeSpecName: "config-data") pod "8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" (UID: "8df0611d-44c4-4791-b1ee-ebfd5e9c4ace"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.287903 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" (UID: "8df0611d-44c4-4791-b1ee-ebfd5e9c4ace"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.290491 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" (UID: "8df0611d-44c4-4791-b1ee-ebfd5e9c4ace"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.333764 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff6f87b-a571-43da-9fbc-9203f0001771-config-data\") pod \"nova-scheduler-0\" (UID: \"8ff6f87b-a571-43da-9fbc-9203f0001771\") " pod="openstack/nova-scheduler-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.333827 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1085d48-78f1-4437-a518-239ba90b7c0b-logs\") pod \"nova-metadata-0\" (UID: \"f1085d48-78f1-4437-a518-239ba90b7c0b\") " pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.333923 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1085d48-78f1-4437-a518-239ba90b7c0b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f1085d48-78f1-4437-a518-239ba90b7c0b\") " 
pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.334138 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcmvm\" (UniqueName: \"kubernetes.io/projected/8ff6f87b-a571-43da-9fbc-9203f0001771-kube-api-access-wcmvm\") pod \"nova-scheduler-0\" (UID: \"8ff6f87b-a571-43da-9fbc-9203f0001771\") " pod="openstack/nova-scheduler-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.334230 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff6f87b-a571-43da-9fbc-9203f0001771-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8ff6f87b-a571-43da-9fbc-9203f0001771\") " pod="openstack/nova-scheduler-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.334293 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1085d48-78f1-4437-a518-239ba90b7c0b-config-data\") pod \"nova-metadata-0\" (UID: \"f1085d48-78f1-4437-a518-239ba90b7c0b\") " pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.334327 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1085d48-78f1-4437-a518-239ba90b7c0b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f1085d48-78f1-4437-a518-239ba90b7c0b\") " pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.334391 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1085d48-78f1-4437-a518-239ba90b7c0b-logs\") pod \"nova-metadata-0\" (UID: \"f1085d48-78f1-4437-a518-239ba90b7c0b\") " pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.334420 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mslmt\" (UniqueName: \"kubernetes.io/projected/f1085d48-78f1-4437-a518-239ba90b7c0b-kube-api-access-mslmt\") pod \"nova-metadata-0\" (UID: \"f1085d48-78f1-4437-a518-239ba90b7c0b\") " pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.334563 4676 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.334580 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.334593 4676 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.334605 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.334618 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdbxs\" (UniqueName: \"kubernetes.io/projected/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace-kube-api-access-cdbxs\") on node \"crc\" DevicePath \"\"" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.338555 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff6f87b-a571-43da-9fbc-9203f0001771-config-data\") pod \"nova-scheduler-0\" (UID: \"8ff6f87b-a571-43da-9fbc-9203f0001771\") " pod="openstack/nova-scheduler-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.340026 
4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1085d48-78f1-4437-a518-239ba90b7c0b-config-data\") pod \"nova-metadata-0\" (UID: \"f1085d48-78f1-4437-a518-239ba90b7c0b\") " pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.340345 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1085d48-78f1-4437-a518-239ba90b7c0b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f1085d48-78f1-4437-a518-239ba90b7c0b\") " pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.346516 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff6f87b-a571-43da-9fbc-9203f0001771-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8ff6f87b-a571-43da-9fbc-9203f0001771\") " pod="openstack/nova-scheduler-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.351788 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1085d48-78f1-4437-a518-239ba90b7c0b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f1085d48-78f1-4437-a518-239ba90b7c0b\") " pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.353904 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcmvm\" (UniqueName: \"kubernetes.io/projected/8ff6f87b-a571-43da-9fbc-9203f0001771-kube-api-access-wcmvm\") pod \"nova-scheduler-0\" (UID: \"8ff6f87b-a571-43da-9fbc-9203f0001771\") " pod="openstack/nova-scheduler-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.357812 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mslmt\" (UniqueName: 
\"kubernetes.io/projected/f1085d48-78f1-4437-a518-239ba90b7c0b-kube-api-access-mslmt\") pod \"nova-metadata-0\" (UID: \"f1085d48-78f1-4437-a518-239ba90b7c0b\") " pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.437809 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.511600 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 00:25:21 crc kubenswrapper[4676]: W0124 00:25:21.944587 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1085d48_78f1_4437_a518_239ba90b7c0b.slice/crio-ca61a03b74a39bf03f1ded2a001e80bb2ece678117144da0b00b139baa25cdea WatchSource:0}: Error finding container ca61a03b74a39bf03f1ded2a001e80bb2ece678117144da0b00b139baa25cdea: Status 404 returned error can't find the container with id ca61a03b74a39bf03f1ded2a001e80bb2ece678117144da0b00b139baa25cdea Jan 24 00:25:21 crc kubenswrapper[4676]: I0124 00:25:21.950870 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.014019 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.040794 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8df0611d-44c4-4791-b1ee-ebfd5e9c4ace","Type":"ContainerDied","Data":"21b2d30f9d7e2c4a549d8c87dd63e0002ace80a317cb0d0205c94126fe98f6c1"} Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.040856 4676 scope.go:117] "RemoveContainer" containerID="01fc81c9856cfd2ef13df58946b52afdb5cba674fdb4423a36d9a11f6c166109" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.041068 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.043171 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f1085d48-78f1-4437-a518-239ba90b7c0b","Type":"ContainerStarted","Data":"ca61a03b74a39bf03f1ded2a001e80bb2ece678117144da0b00b139baa25cdea"} Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.089539 4676 scope.go:117] "RemoveContainer" containerID="71c7f3216e344862d8255e5daeef618a41a9f2fd2e7332ca45fe189abdb2cd5c" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.109850 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.122271 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.135103 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.137056 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.140228 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.145487 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.146635 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.146964 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.249485 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff542eb3-0142-4d46-a6e6-2e89c73f5824-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.249549 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff542eb3-0142-4d46-a6e6-2e89c73f5824-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.249650 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff542eb3-0142-4d46-a6e6-2e89c73f5824-logs\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.249720 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ff542eb3-0142-4d46-a6e6-2e89c73f5824-config-data\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.249767 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff542eb3-0142-4d46-a6e6-2e89c73f5824-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.249791 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfzr5\" (UniqueName: \"kubernetes.io/projected/ff542eb3-0142-4d46-a6e6-2e89c73f5824-kube-api-access-jfzr5\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.267812 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32d678d8-bc5d-4993-afc3-c6700d53b00e" path="/var/lib/kubelet/pods/32d678d8-bc5d-4993-afc3-c6700d53b00e/volumes" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.268495 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8df0611d-44c4-4791-b1ee-ebfd5e9c4ace" path="/var/lib/kubelet/pods/8df0611d-44c4-4791-b1ee-ebfd5e9c4ace/volumes" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.269204 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6784012-a1a0-4647-929f-50788a8dc5bb" path="/var/lib/kubelet/pods/d6784012-a1a0-4647-929f-50788a8dc5bb/volumes" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.351891 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff542eb3-0142-4d46-a6e6-2e89c73f5824-logs\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " 
pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.351962 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff542eb3-0142-4d46-a6e6-2e89c73f5824-config-data\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.352010 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff542eb3-0142-4d46-a6e6-2e89c73f5824-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.352034 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfzr5\" (UniqueName: \"kubernetes.io/projected/ff542eb3-0142-4d46-a6e6-2e89c73f5824-kube-api-access-jfzr5\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.352459 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff542eb3-0142-4d46-a6e6-2e89c73f5824-logs\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.352664 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff542eb3-0142-4d46-a6e6-2e89c73f5824-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.353031 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ff542eb3-0142-4d46-a6e6-2e89c73f5824-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.357349 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff542eb3-0142-4d46-a6e6-2e89c73f5824-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.358598 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff542eb3-0142-4d46-a6e6-2e89c73f5824-config-data\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.359419 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff542eb3-0142-4d46-a6e6-2e89c73f5824-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.366722 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff542eb3-0142-4d46-a6e6-2e89c73f5824-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.372094 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfzr5\" (UniqueName: \"kubernetes.io/projected/ff542eb3-0142-4d46-a6e6-2e89c73f5824-kube-api-access-jfzr5\") pod \"nova-api-0\" (UID: \"ff542eb3-0142-4d46-a6e6-2e89c73f5824\") " pod="openstack/nova-api-0" Jan 24 00:25:22 crc kubenswrapper[4676]: I0124 00:25:22.627341 4676 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 00:25:23 crc kubenswrapper[4676]: I0124 00:25:23.053113 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 00:25:23 crc kubenswrapper[4676]: I0124 00:25:23.064669 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f1085d48-78f1-4437-a518-239ba90b7c0b","Type":"ContainerStarted","Data":"6bf98a0044126e70d8d0724db2f9a2c322af2b1541eaab3a04a5b65ffa344c6e"} Jan 24 00:25:23 crc kubenswrapper[4676]: I0124 00:25:23.064714 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f1085d48-78f1-4437-a518-239ba90b7c0b","Type":"ContainerStarted","Data":"36247125dcfa3ca138a807b7c7c3bcb539eae76770c74e144f4f740836a033a2"} Jan 24 00:25:23 crc kubenswrapper[4676]: I0124 00:25:23.067798 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8ff6f87b-a571-43da-9fbc-9203f0001771","Type":"ContainerStarted","Data":"188dbba19d2bc9c27dd3a1c50427949a44481b8108236719f86cf0b3b1b7e55a"} Jan 24 00:25:23 crc kubenswrapper[4676]: I0124 00:25:23.067869 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8ff6f87b-a571-43da-9fbc-9203f0001771","Type":"ContainerStarted","Data":"9bc49e578ceca7e8e8b18c2ec8945fca3853f9ac77ae5798e51c50d5feb201dd"} Jan 24 00:25:23 crc kubenswrapper[4676]: I0124 00:25:23.092620 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.092602647 podStartE2EDuration="2.092602647s" podCreationTimestamp="2026-01-24 00:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:25:23.088737779 +0000 UTC m=+1307.118708820" watchObservedRunningTime="2026-01-24 00:25:23.092602647 +0000 UTC m=+1307.122573638" 
Jan 24 00:25:24 crc kubenswrapper[4676]: I0124 00:25:24.081357 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff542eb3-0142-4d46-a6e6-2e89c73f5824","Type":"ContainerStarted","Data":"98163d4d693f95146309125c2107c6529f89459431fd7e1fd9849d47d191bf29"} Jan 24 00:25:24 crc kubenswrapper[4676]: I0124 00:25:24.082860 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff542eb3-0142-4d46-a6e6-2e89c73f5824","Type":"ContainerStarted","Data":"640bb5433c9f92bd6425cafd54e570442e04f0073ec1546df3ee542a99930f01"} Jan 24 00:25:24 crc kubenswrapper[4676]: I0124 00:25:24.083005 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff542eb3-0142-4d46-a6e6-2e89c73f5824","Type":"ContainerStarted","Data":"bbe0d2c9e735026a39036967c182b36475355ec72fe3dd6eb21bf174887ad96f"} Jan 24 00:25:24 crc kubenswrapper[4676]: I0124 00:25:24.103033 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.103013582 podStartE2EDuration="3.103013582s" podCreationTimestamp="2026-01-24 00:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:25:23.114748812 +0000 UTC m=+1307.144719823" watchObservedRunningTime="2026-01-24 00:25:24.103013582 +0000 UTC m=+1308.132984573" Jan 24 00:25:24 crc kubenswrapper[4676]: I0124 00:25:24.107643 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.107634812 podStartE2EDuration="2.107634812s" podCreationTimestamp="2026-01-24 00:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:25:24.096645968 +0000 UTC m=+1308.126617009" watchObservedRunningTime="2026-01-24 00:25:24.107634812 +0000 UTC 
m=+1308.137605813" Jan 24 00:25:26 crc kubenswrapper[4676]: I0124 00:25:26.438127 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 00:25:26 crc kubenswrapper[4676]: I0124 00:25:26.438475 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 00:25:26 crc kubenswrapper[4676]: I0124 00:25:26.512424 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 24 00:25:31 crc kubenswrapper[4676]: I0124 00:25:31.438455 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 24 00:25:31 crc kubenswrapper[4676]: I0124 00:25:31.439437 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 24 00:25:31 crc kubenswrapper[4676]: I0124 00:25:31.512595 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 24 00:25:31 crc kubenswrapper[4676]: I0124 00:25:31.556658 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 24 00:25:32 crc kubenswrapper[4676]: I0124 00:25:32.208880 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 24 00:25:32 crc kubenswrapper[4676]: I0124 00:25:32.450521 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f1085d48-78f1-4437-a518-239ba90b7c0b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 00:25:32 crc kubenswrapper[4676]: I0124 00:25:32.450525 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f1085d48-78f1-4437-a518-239ba90b7c0b" containerName="nova-metadata-metadata" probeResult="failure" 
output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 00:25:32 crc kubenswrapper[4676]: I0124 00:25:32.628978 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 00:25:32 crc kubenswrapper[4676]: I0124 00:25:32.629294 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 00:25:33 crc kubenswrapper[4676]: I0124 00:25:33.643531 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff542eb3-0142-4d46-a6e6-2e89c73f5824" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 00:25:33 crc kubenswrapper[4676]: I0124 00:25:33.643599 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff542eb3-0142-4d46-a6e6-2e89c73f5824" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 00:25:34 crc kubenswrapper[4676]: I0124 00:25:34.226055 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 24 00:25:39 crc kubenswrapper[4676]: I0124 00:25:39.363749 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:25:39 crc kubenswrapper[4676]: I0124 00:25:39.364229 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:25:41 crc kubenswrapper[4676]: I0124 00:25:41.445832 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 00:25:41 crc kubenswrapper[4676]: I0124 00:25:41.447755 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 00:25:41 crc kubenswrapper[4676]: I0124 00:25:41.452469 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 24 00:25:42 crc kubenswrapper[4676]: I0124 00:25:42.277849 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 24 00:25:42 crc kubenswrapper[4676]: I0124 00:25:42.647134 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 00:25:42 crc kubenswrapper[4676]: I0124 00:25:42.647764 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 00:25:42 crc kubenswrapper[4676]: I0124 00:25:42.650937 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 00:25:42 crc kubenswrapper[4676]: I0124 00:25:42.668969 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 24 00:25:43 crc kubenswrapper[4676]: I0124 00:25:43.279431 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 00:25:43 crc kubenswrapper[4676]: I0124 00:25:43.290342 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 24 00:25:51 crc kubenswrapper[4676]: I0124 00:25:51.153106 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 00:25:51 crc kubenswrapper[4676]: I0124 00:25:51.994020 4676 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 00:25:55 crc kubenswrapper[4676]: I0124 00:25:55.732973 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="68d6466c-a6ff-40ba-952d-007b14efdfd3" containerName="rabbitmq" containerID="cri-o://7ab3a50e872eb2d62ccafb5072d9093f176eca7aee4a3935c8627a779d608385" gracePeriod=604796 Jan 24 00:25:55 crc kubenswrapper[4676]: I0124 00:25:55.802458 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="68d6466c-a6ff-40ba-952d-007b14efdfd3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 24 00:25:56 crc kubenswrapper[4676]: I0124 00:25:56.611838 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="36558de2-6aac-43e9-832d-2f96c46e8152" containerName="rabbitmq" containerID="cri-o://028693ff31c2a35e401cae916e148bd75fd1aa0c3135036bd6feedcc4c32fed9" gracePeriod=604796 Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.368369 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.439307 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68d6466c-a6ff-40ba-952d-007b14efdfd3-erlang-cookie-secret\") pod \"68d6466c-a6ff-40ba-952d-007b14efdfd3\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.439739 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68d6466c-a6ff-40ba-952d-007b14efdfd3-pod-info\") pod \"68d6466c-a6ff-40ba-952d-007b14efdfd3\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.439860 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-plugins\") pod \"68d6466c-a6ff-40ba-952d-007b14efdfd3\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.440019 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"68d6466c-a6ff-40ba-952d-007b14efdfd3\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.440252 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-tls\") pod \"68d6466c-a6ff-40ba-952d-007b14efdfd3\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.440447 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-confd\") pod \"68d6466c-a6ff-40ba-952d-007b14efdfd3\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.440562 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "68d6466c-a6ff-40ba-952d-007b14efdfd3" (UID: "68d6466c-a6ff-40ba-952d-007b14efdfd3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.440763 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-config-data\") pod \"68d6466c-a6ff-40ba-952d-007b14efdfd3\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.440925 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-erlang-cookie\") pod \"68d6466c-a6ff-40ba-952d-007b14efdfd3\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.441076 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-server-conf\") pod \"68d6466c-a6ff-40ba-952d-007b14efdfd3\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.441225 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v959\" (UniqueName: \"kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-kube-api-access-7v959\") pod 
\"68d6466c-a6ff-40ba-952d-007b14efdfd3\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.441325 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-plugins-conf\") pod \"68d6466c-a6ff-40ba-952d-007b14efdfd3\" (UID: \"68d6466c-a6ff-40ba-952d-007b14efdfd3\") " Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.441968 4676 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.442585 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "68d6466c-a6ff-40ba-952d-007b14efdfd3" (UID: "68d6466c-a6ff-40ba-952d-007b14efdfd3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.442605 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "68d6466c-a6ff-40ba-952d-007b14efdfd3" (UID: "68d6466c-a6ff-40ba-952d-007b14efdfd3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.448470 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "68d6466c-a6ff-40ba-952d-007b14efdfd3" (UID: "68d6466c-a6ff-40ba-952d-007b14efdfd3"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.448675 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/68d6466c-a6ff-40ba-952d-007b14efdfd3-pod-info" (OuterVolumeSpecName: "pod-info") pod "68d6466c-a6ff-40ba-952d-007b14efdfd3" (UID: "68d6466c-a6ff-40ba-952d-007b14efdfd3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.481800 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-kube-api-access-7v959" (OuterVolumeSpecName: "kube-api-access-7v959") pod "68d6466c-a6ff-40ba-952d-007b14efdfd3" (UID: "68d6466c-a6ff-40ba-952d-007b14efdfd3"). InnerVolumeSpecName "kube-api-access-7v959". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.482581 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "68d6466c-a6ff-40ba-952d-007b14efdfd3" (UID: "68d6466c-a6ff-40ba-952d-007b14efdfd3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.485590 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d6466c-a6ff-40ba-952d-007b14efdfd3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "68d6466c-a6ff-40ba-952d-007b14efdfd3" (UID: "68d6466c-a6ff-40ba-952d-007b14efdfd3"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.523906 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-config-data" (OuterVolumeSpecName: "config-data") pod "68d6466c-a6ff-40ba-952d-007b14efdfd3" (UID: "68d6466c-a6ff-40ba-952d-007b14efdfd3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.540952 4676 generic.go:334] "Generic (PLEG): container finished" podID="68d6466c-a6ff-40ba-952d-007b14efdfd3" containerID="7ab3a50e872eb2d62ccafb5072d9093f176eca7aee4a3935c8627a779d608385" exitCode=0 Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.540991 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"68d6466c-a6ff-40ba-952d-007b14efdfd3","Type":"ContainerDied","Data":"7ab3a50e872eb2d62ccafb5072d9093f176eca7aee4a3935c8627a779d608385"} Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.541016 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"68d6466c-a6ff-40ba-952d-007b14efdfd3","Type":"ContainerDied","Data":"32030fde9c3a43091d81748f9987539fcf3f4b60bda9f5dc9e85b5f47719d7eb"} Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.541033 4676 scope.go:117] "RemoveContainer" containerID="7ab3a50e872eb2d62ccafb5072d9093f176eca7aee4a3935c8627a779d608385" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.541165 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.546020 4676 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.546051 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.546063 4676 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.546074 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v959\" (UniqueName: \"kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-kube-api-access-7v959\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.546086 4676 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.546095 4676 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68d6466c-a6ff-40ba-952d-007b14efdfd3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.546104 4676 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68d6466c-a6ff-40ba-952d-007b14efdfd3-pod-info\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 
00:26:02.546130 4676 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.582359 4676 scope.go:117] "RemoveContainer" containerID="0db0998b0243f3673ede9e2b18bccbbf8c17216722eeabea36670a8396448723" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.584864 4676 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.615539 4676 scope.go:117] "RemoveContainer" containerID="7ab3a50e872eb2d62ccafb5072d9093f176eca7aee4a3935c8627a779d608385" Jan 24 00:26:02 crc kubenswrapper[4676]: E0124 00:26:02.615959 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ab3a50e872eb2d62ccafb5072d9093f176eca7aee4a3935c8627a779d608385\": container with ID starting with 7ab3a50e872eb2d62ccafb5072d9093f176eca7aee4a3935c8627a779d608385 not found: ID does not exist" containerID="7ab3a50e872eb2d62ccafb5072d9093f176eca7aee4a3935c8627a779d608385" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.615986 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ab3a50e872eb2d62ccafb5072d9093f176eca7aee4a3935c8627a779d608385"} err="failed to get container status \"7ab3a50e872eb2d62ccafb5072d9093f176eca7aee4a3935c8627a779d608385\": rpc error: code = NotFound desc = could not find container \"7ab3a50e872eb2d62ccafb5072d9093f176eca7aee4a3935c8627a779d608385\": container with ID starting with 7ab3a50e872eb2d62ccafb5072d9093f176eca7aee4a3935c8627a779d608385 not found: ID does not exist" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.616006 4676 scope.go:117] "RemoveContainer" 
containerID="0db0998b0243f3673ede9e2b18bccbbf8c17216722eeabea36670a8396448723" Jan 24 00:26:02 crc kubenswrapper[4676]: E0124 00:26:02.616289 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0db0998b0243f3673ede9e2b18bccbbf8c17216722eeabea36670a8396448723\": container with ID starting with 0db0998b0243f3673ede9e2b18bccbbf8c17216722eeabea36670a8396448723 not found: ID does not exist" containerID="0db0998b0243f3673ede9e2b18bccbbf8c17216722eeabea36670a8396448723" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.616307 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0db0998b0243f3673ede9e2b18bccbbf8c17216722eeabea36670a8396448723"} err="failed to get container status \"0db0998b0243f3673ede9e2b18bccbbf8c17216722eeabea36670a8396448723\": rpc error: code = NotFound desc = could not find container \"0db0998b0243f3673ede9e2b18bccbbf8c17216722eeabea36670a8396448723\": container with ID starting with 0db0998b0243f3673ede9e2b18bccbbf8c17216722eeabea36670a8396448723 not found: ID does not exist" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.625489 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-server-conf" (OuterVolumeSpecName: "server-conf") pod "68d6466c-a6ff-40ba-952d-007b14efdfd3" (UID: "68d6466c-a6ff-40ba-952d-007b14efdfd3"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.647470 4676 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.647501 4676 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68d6466c-a6ff-40ba-952d-007b14efdfd3-server-conf\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.696783 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "68d6466c-a6ff-40ba-952d-007b14efdfd3" (UID: "68d6466c-a6ff-40ba-952d-007b14efdfd3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.749008 4676 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/68d6466c-a6ff-40ba-952d-007b14efdfd3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.887276 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.901838 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.913919 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 00:26:02 crc kubenswrapper[4676]: E0124 00:26:02.914680 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d6466c-a6ff-40ba-952d-007b14efdfd3" containerName="rabbitmq" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.914697 4676 
state_mem.go:107] "Deleted CPUSet assignment" podUID="68d6466c-a6ff-40ba-952d-007b14efdfd3" containerName="rabbitmq" Jan 24 00:26:02 crc kubenswrapper[4676]: E0124 00:26:02.914717 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d6466c-a6ff-40ba-952d-007b14efdfd3" containerName="setup-container" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.914723 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d6466c-a6ff-40ba-952d-007b14efdfd3" containerName="setup-container" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.914902 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="68d6466c-a6ff-40ba-952d-007b14efdfd3" containerName="rabbitmq" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.915785 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.917174 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.918048 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nxmk6" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.918966 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.918968 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.919010 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.919043 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.919318 4676 reflector.go:368] Caches populated 
for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 24 00:26:02 crc kubenswrapper[4676]: I0124 00:26:02.925752 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.060260 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c162e478-58e3-4a83-97cb-29887613c1aa-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.060639 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c162e478-58e3-4a83-97cb-29887613c1aa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.060675 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c162e478-58e3-4a83-97cb-29887613c1aa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.060716 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjv27\" (UniqueName: \"kubernetes.io/projected/c162e478-58e3-4a83-97cb-29887613c1aa-kube-api-access-vjv27\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.060742 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/c162e478-58e3-4a83-97cb-29887613c1aa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.060816 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c162e478-58e3-4a83-97cb-29887613c1aa-config-data\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.060848 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.060885 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c162e478-58e3-4a83-97cb-29887613c1aa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.060905 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c162e478-58e3-4a83-97cb-29887613c1aa-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.060922 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c162e478-58e3-4a83-97cb-29887613c1aa-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.060949 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c162e478-58e3-4a83-97cb-29887613c1aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.162736 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c162e478-58e3-4a83-97cb-29887613c1aa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.163156 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c162e478-58e3-4a83-97cb-29887613c1aa-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.163274 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c162e478-58e3-4a83-97cb-29887613c1aa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.163436 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c162e478-58e3-4a83-97cb-29887613c1aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 
00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.163572 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c162e478-58e3-4a83-97cb-29887613c1aa-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.163783 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c162e478-58e3-4a83-97cb-29887613c1aa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.163944 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c162e478-58e3-4a83-97cb-29887613c1aa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.164104 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjv27\" (UniqueName: \"kubernetes.io/projected/c162e478-58e3-4a83-97cb-29887613c1aa-kube-api-access-vjv27\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.164246 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c162e478-58e3-4a83-97cb-29887613c1aa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.164487 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/c162e478-58e3-4a83-97cb-29887613c1aa-config-data\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.164629 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.164861 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.165955 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c162e478-58e3-4a83-97cb-29887613c1aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.166127 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c162e478-58e3-4a83-97cb-29887613c1aa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.166155 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c162e478-58e3-4a83-97cb-29887613c1aa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " 
pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.166666 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c162e478-58e3-4a83-97cb-29887613c1aa-config-data\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.166911 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c162e478-58e3-4a83-97cb-29887613c1aa-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.171074 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c162e478-58e3-4a83-97cb-29887613c1aa-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.172091 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c162e478-58e3-4a83-97cb-29887613c1aa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.172773 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c162e478-58e3-4a83-97cb-29887613c1aa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.179165 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/c162e478-58e3-4a83-97cb-29887613c1aa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.187691 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjv27\" (UniqueName: \"kubernetes.io/projected/c162e478-58e3-4a83-97cb-29887613c1aa-kube-api-access-vjv27\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.212479 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"c162e478-58e3-4a83-97cb-29887613c1aa\") " pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.233816 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.289030 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.471916 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-plugins-conf\") pod \"36558de2-6aac-43e9-832d-2f96c46e8152\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.471951 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"36558de2-6aac-43e9-832d-2f96c46e8152\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.471994 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-plugins\") pod \"36558de2-6aac-43e9-832d-2f96c46e8152\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.472084 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36558de2-6aac-43e9-832d-2f96c46e8152-erlang-cookie-secret\") pod \"36558de2-6aac-43e9-832d-2f96c46e8152\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.472105 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdctt\" (UniqueName: \"kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-kube-api-access-sdctt\") pod \"36558de2-6aac-43e9-832d-2f96c46e8152\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.472313 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-config-data\") pod \"36558de2-6aac-43e9-832d-2f96c46e8152\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.472808 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-confd\") pod \"36558de2-6aac-43e9-832d-2f96c46e8152\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.473242 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36558de2-6aac-43e9-832d-2f96c46e8152-pod-info\") pod \"36558de2-6aac-43e9-832d-2f96c46e8152\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.473260 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-server-conf\") pod \"36558de2-6aac-43e9-832d-2f96c46e8152\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.473369 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-erlang-cookie\") pod \"36558de2-6aac-43e9-832d-2f96c46e8152\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.473401 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-tls\") pod \"36558de2-6aac-43e9-832d-2f96c46e8152\" (UID: \"36558de2-6aac-43e9-832d-2f96c46e8152\") " Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 
00:26:03.473503 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "36558de2-6aac-43e9-832d-2f96c46e8152" (UID: "36558de2-6aac-43e9-832d-2f96c46e8152"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.473881 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "36558de2-6aac-43e9-832d-2f96c46e8152" (UID: "36558de2-6aac-43e9-832d-2f96c46e8152"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.474659 4676 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.474680 4676 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.474887 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "36558de2-6aac-43e9-832d-2f96c46e8152" (UID: "36558de2-6aac-43e9-832d-2f96c46e8152"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.476883 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "36558de2-6aac-43e9-832d-2f96c46e8152" (UID: "36558de2-6aac-43e9-832d-2f96c46e8152"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.483393 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/36558de2-6aac-43e9-832d-2f96c46e8152-pod-info" (OuterVolumeSpecName: "pod-info") pod "36558de2-6aac-43e9-832d-2f96c46e8152" (UID: "36558de2-6aac-43e9-832d-2f96c46e8152"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.483662 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36558de2-6aac-43e9-832d-2f96c46e8152-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "36558de2-6aac-43e9-832d-2f96c46e8152" (UID: "36558de2-6aac-43e9-832d-2f96c46e8152"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.483714 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-kube-api-access-sdctt" (OuterVolumeSpecName: "kube-api-access-sdctt") pod "36558de2-6aac-43e9-832d-2f96c46e8152" (UID: "36558de2-6aac-43e9-832d-2f96c46e8152"). InnerVolumeSpecName "kube-api-access-sdctt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.484525 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "36558de2-6aac-43e9-832d-2f96c46e8152" (UID: "36558de2-6aac-43e9-832d-2f96c46e8152"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.510335 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-config-data" (OuterVolumeSpecName: "config-data") pod "36558de2-6aac-43e9-832d-2f96c46e8152" (UID: "36558de2-6aac-43e9-832d-2f96c46e8152"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.557090 4676 generic.go:334] "Generic (PLEG): container finished" podID="36558de2-6aac-43e9-832d-2f96c46e8152" containerID="028693ff31c2a35e401cae916e148bd75fd1aa0c3135036bd6feedcc4c32fed9" exitCode=0 Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.557127 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"36558de2-6aac-43e9-832d-2f96c46e8152","Type":"ContainerDied","Data":"028693ff31c2a35e401cae916e148bd75fd1aa0c3135036bd6feedcc4c32fed9"} Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.557150 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"36558de2-6aac-43e9-832d-2f96c46e8152","Type":"ContainerDied","Data":"a90a5dbbd754f4d91197a2d8faa756ac77a0f539415213117ebe9918b73147bc"} Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.557167 4676 scope.go:117] "RemoveContainer" containerID="028693ff31c2a35e401cae916e148bd75fd1aa0c3135036bd6feedcc4c32fed9" Jan 24 00:26:03 crc 
kubenswrapper[4676]: I0124 00:26:03.557280 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.564457 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-server-conf" (OuterVolumeSpecName: "server-conf") pod "36558de2-6aac-43e9-832d-2f96c46e8152" (UID: "36558de2-6aac-43e9-832d-2f96c46e8152"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.601310 4676 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36558de2-6aac-43e9-832d-2f96c46e8152-pod-info\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.601538 4676 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-server-conf\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.601601 4676 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.601663 4676 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.601765 4676 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.601851 4676 
reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36558de2-6aac-43e9-832d-2f96c46e8152-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.601916 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdctt\" (UniqueName: \"kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-kube-api-access-sdctt\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.601994 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36558de2-6aac-43e9-832d-2f96c46e8152-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.602603 4676 scope.go:117] "RemoveContainer" containerID="9d758f8c9e92f01e23b0692665a48971ec4eb0595df60b78749bc71450ba8960" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.624206 4676 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.631954 4676 scope.go:117] "RemoveContainer" containerID="028693ff31c2a35e401cae916e148bd75fd1aa0c3135036bd6feedcc4c32fed9" Jan 24 00:26:03 crc kubenswrapper[4676]: E0124 00:26:03.632777 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"028693ff31c2a35e401cae916e148bd75fd1aa0c3135036bd6feedcc4c32fed9\": container with ID starting with 028693ff31c2a35e401cae916e148bd75fd1aa0c3135036bd6feedcc4c32fed9 not found: ID does not exist" containerID="028693ff31c2a35e401cae916e148bd75fd1aa0c3135036bd6feedcc4c32fed9" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.632818 4676 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"028693ff31c2a35e401cae916e148bd75fd1aa0c3135036bd6feedcc4c32fed9"} err="failed to get container status \"028693ff31c2a35e401cae916e148bd75fd1aa0c3135036bd6feedcc4c32fed9\": rpc error: code = NotFound desc = could not find container \"028693ff31c2a35e401cae916e148bd75fd1aa0c3135036bd6feedcc4c32fed9\": container with ID starting with 028693ff31c2a35e401cae916e148bd75fd1aa0c3135036bd6feedcc4c32fed9 not found: ID does not exist" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.632846 4676 scope.go:117] "RemoveContainer" containerID="9d758f8c9e92f01e23b0692665a48971ec4eb0595df60b78749bc71450ba8960" Jan 24 00:26:03 crc kubenswrapper[4676]: E0124 00:26:03.633345 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d758f8c9e92f01e23b0692665a48971ec4eb0595df60b78749bc71450ba8960\": container with ID starting with 9d758f8c9e92f01e23b0692665a48971ec4eb0595df60b78749bc71450ba8960 not found: ID does not exist" containerID="9d758f8c9e92f01e23b0692665a48971ec4eb0595df60b78749bc71450ba8960" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.633534 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d758f8c9e92f01e23b0692665a48971ec4eb0595df60b78749bc71450ba8960"} err="failed to get container status \"9d758f8c9e92f01e23b0692665a48971ec4eb0595df60b78749bc71450ba8960\": rpc error: code = NotFound desc = could not find container \"9d758f8c9e92f01e23b0692665a48971ec4eb0595df60b78749bc71450ba8960\": container with ID starting with 9d758f8c9e92f01e23b0692665a48971ec4eb0595df60b78749bc71450ba8960 not found: ID does not exist" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.656646 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod 
"36558de2-6aac-43e9-832d-2f96c46e8152" (UID: "36558de2-6aac-43e9-832d-2f96c46e8152"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.703873 4676 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.703909 4676 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36558de2-6aac-43e9-832d-2f96c46e8152-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.810031 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.908447 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.921513 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.939352 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 00:26:03 crc kubenswrapper[4676]: E0124 00:26:03.939766 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36558de2-6aac-43e9-832d-2f96c46e8152" containerName="setup-container" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.939782 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="36558de2-6aac-43e9-832d-2f96c46e8152" containerName="setup-container" Jan 24 00:26:03 crc kubenswrapper[4676]: E0124 00:26:03.939800 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36558de2-6aac-43e9-832d-2f96c46e8152" containerName="rabbitmq" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.939807 4676 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="36558de2-6aac-43e9-832d-2f96c46e8152" containerName="rabbitmq" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.940026 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="36558de2-6aac-43e9-832d-2f96c46e8152" containerName="rabbitmq" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.941188 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.952703 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.952875 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.952973 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.953080 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7x4sc" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.953183 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.953306 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.953484 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 24 00:26:03 crc kubenswrapper[4676]: I0124 00:26:03.986542 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.116900 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a2df1d42-fa93-4771-ba77-1c27f820b298-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.116996 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a2df1d42-fa93-4771-ba77-1c27f820b298-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.117036 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.117089 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a2df1d42-fa93-4771-ba77-1c27f820b298-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.117159 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7d56\" (UniqueName: \"kubernetes.io/projected/a2df1d42-fa93-4771-ba77-1c27f820b298-kube-api-access-p7d56\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.117202 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a2df1d42-fa93-4771-ba77-1c27f820b298-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.117269 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a2df1d42-fa93-4771-ba77-1c27f820b298-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.117300 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a2df1d42-fa93-4771-ba77-1c27f820b298-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.117330 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a2df1d42-fa93-4771-ba77-1c27f820b298-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.117366 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a2df1d42-fa93-4771-ba77-1c27f820b298-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.117428 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/a2df1d42-fa93-4771-ba77-1c27f820b298-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.218494 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a2df1d42-fa93-4771-ba77-1c27f820b298-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.218543 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.218576 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a2df1d42-fa93-4771-ba77-1c27f820b298-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.218614 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7d56\" (UniqueName: \"kubernetes.io/projected/a2df1d42-fa93-4771-ba77-1c27f820b298-kube-api-access-p7d56\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.218642 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/a2df1d42-fa93-4771-ba77-1c27f820b298-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.218679 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a2df1d42-fa93-4771-ba77-1c27f820b298-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.218698 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a2df1d42-fa93-4771-ba77-1c27f820b298-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.218717 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a2df1d42-fa93-4771-ba77-1c27f820b298-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.218743 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a2df1d42-fa93-4771-ba77-1c27f820b298-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.218770 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2df1d42-fa93-4771-ba77-1c27f820b298-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.218791 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a2df1d42-fa93-4771-ba77-1c27f820b298-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.219887 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a2df1d42-fa93-4771-ba77-1c27f820b298-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.219979 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a2df1d42-fa93-4771-ba77-1c27f820b298-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.220252 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.220361 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a2df1d42-fa93-4771-ba77-1c27f820b298-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.220426 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2df1d42-fa93-4771-ba77-1c27f820b298-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.221427 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a2df1d42-fa93-4771-ba77-1c27f820b298-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.224058 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a2df1d42-fa93-4771-ba77-1c27f820b298-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.226953 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a2df1d42-fa93-4771-ba77-1c27f820b298-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.234830 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a2df1d42-fa93-4771-ba77-1c27f820b298-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.235442 4676 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a2df1d42-fa93-4771-ba77-1c27f820b298-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.239894 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7d56\" (UniqueName: \"kubernetes.io/projected/a2df1d42-fa93-4771-ba77-1c27f820b298-kube-api-access-p7d56\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.267714 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36558de2-6aac-43e9-832d-2f96c46e8152" path="/var/lib/kubelet/pods/36558de2-6aac-43e9-832d-2f96c46e8152/volumes" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.269422 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68d6466c-a6ff-40ba-952d-007b14efdfd3" path="/var/lib/kubelet/pods/68d6466c-a6ff-40ba-952d-007b14efdfd3/volumes" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.271981 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2df1d42-fa93-4771-ba77-1c27f820b298\") " pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.333116 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.569747 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c162e478-58e3-4a83-97cb-29887613c1aa","Type":"ContainerStarted","Data":"a86dae02bbfc444752a4a1ad59c4e336d8ba2bab252f75270770135809a7937e"} Jan 24 00:26:04 crc kubenswrapper[4676]: I0124 00:26:04.839195 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 00:26:04 crc kubenswrapper[4676]: W0124 00:26:04.840717 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2df1d42_fa93_4771_ba77_1c27f820b298.slice/crio-3e63c99875d3b0794d7835468269638558c3933816bb36b01c2e320b766fd256 WatchSource:0}: Error finding container 3e63c99875d3b0794d7835468269638558c3933816bb36b01c2e320b766fd256: Status 404 returned error can't find the container with id 3e63c99875d3b0794d7835468269638558c3933816bb36b01c2e320b766fd256 Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.020306 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-28gwc"] Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.022142 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.024508 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.050891 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-28gwc"] Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.163070 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.163129 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.163178 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-config\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.163203 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " 
pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.163220 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbbrv\" (UniqueName: \"kubernetes.io/projected/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-kube-api-access-fbbrv\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.163288 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.163327 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-dns-svc\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.265279 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.265367 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-dns-svc\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " 
pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.265474 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.265523 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.265585 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-config\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.265619 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.265643 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbbrv\" (UniqueName: \"kubernetes.io/projected/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-kube-api-access-fbbrv\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" 
Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.266220 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.266250 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-dns-svc\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.266792 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.266833 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.266871 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-config\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.267461 4676 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.283591 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbbrv\" (UniqueName: \"kubernetes.io/projected/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-kube-api-access-fbbrv\") pod \"dnsmasq-dns-d558885bc-28gwc\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.350127 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.582336 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a2df1d42-fa93-4771-ba77-1c27f820b298","Type":"ContainerStarted","Data":"3e63c99875d3b0794d7835468269638558c3933816bb36b01c2e320b766fd256"} Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.586450 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c162e478-58e3-4a83-97cb-29887613c1aa","Type":"ContainerStarted","Data":"e8dd570456f163b9ffc9b717def36eea3b68755520281c3ad977128488285d1c"} Jan 24 00:26:05 crc kubenswrapper[4676]: I0124 00:26:05.828349 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-28gwc"] Jan 24 00:26:06 crc kubenswrapper[4676]: I0124 00:26:06.595482 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a2df1d42-fa93-4771-ba77-1c27f820b298","Type":"ContainerStarted","Data":"4622ad5ec55e893e88aecaf78be207183e4c4ff1e3645b53366aa4945663cc45"} Jan 24 00:26:06 crc kubenswrapper[4676]: I0124 00:26:06.597817 4676 
generic.go:334] "Generic (PLEG): container finished" podID="cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" containerID="53310433eaa7d3cfca542e7eeab8e064ca40864427d91517b1468dac71afcd29" exitCode=0 Jan 24 00:26:06 crc kubenswrapper[4676]: I0124 00:26:06.597878 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-28gwc" event={"ID":"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa","Type":"ContainerDied","Data":"53310433eaa7d3cfca542e7eeab8e064ca40864427d91517b1468dac71afcd29"} Jan 24 00:26:06 crc kubenswrapper[4676]: I0124 00:26:06.598026 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-28gwc" event={"ID":"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa","Type":"ContainerStarted","Data":"47b70e12919d8c5f44b9279ac72d11f87aeeca3d747a40cf17fe0462e549f089"} Jan 24 00:26:07 crc kubenswrapper[4676]: I0124 00:26:07.607451 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-28gwc" event={"ID":"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa","Type":"ContainerStarted","Data":"2c37d9e66e5004b7c1d2e801bd1fd56b36ccdd85608287fcda0faf0b75dfccc3"} Jan 24 00:26:07 crc kubenswrapper[4676]: I0124 00:26:07.633270 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-28gwc" podStartSLOduration=3.633252618 podStartE2EDuration="3.633252618s" podCreationTimestamp="2026-01-24 00:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:26:07.626544164 +0000 UTC m=+1351.656515165" watchObservedRunningTime="2026-01-24 00:26:07.633252618 +0000 UTC m=+1351.663223619" Jan 24 00:26:08 crc kubenswrapper[4676]: I0124 00:26:08.620452 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:09 crc kubenswrapper[4676]: I0124 00:26:09.365222 4676 patch_prober.go:28] interesting 
pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:26:09 crc kubenswrapper[4676]: I0124 00:26:09.365623 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.352918 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.435705 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-tkk74"] Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.435929 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" podUID="6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4" containerName="dnsmasq-dns" containerID="cri-o://8189f6b4420a2e8c611c03a3298dcf14f84845a898137968cf5f45e6c0c37685" gracePeriod=10 Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.650466 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b865b64bc-drclt"] Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.652314 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.675315 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b865b64bc-drclt"] Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.694074 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-ovsdbserver-nb\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.694113 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-config\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.694158 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-dns-svc\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.694200 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-dns-swift-storage-0\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.694223 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-zqcd7\" (UniqueName: \"kubernetes.io/projected/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-kube-api-access-zqcd7\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.694262 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-ovsdbserver-sb\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.694289 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.798183 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-dns-svc\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.798617 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-dns-swift-storage-0\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.798669 4676 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-zqcd7\" (UniqueName: \"kubernetes.io/projected/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-kube-api-access-zqcd7\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.798756 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-ovsdbserver-sb\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.798812 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.798890 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-ovsdbserver-nb\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.798921 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-config\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.799145 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-dns-svc\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.799473 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-dns-swift-storage-0\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.800019 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-ovsdbserver-nb\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.800315 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.800993 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-ovsdbserver-sb\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.804297 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-config\") pod 
\"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.818710 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqcd7\" (UniqueName: \"kubernetes.io/projected/cdf394f7-5d67-4a0f-9644-82fe83a72e2d-kube-api-access-zqcd7\") pod \"dnsmasq-dns-6b865b64bc-drclt\" (UID: \"cdf394f7-5d67-4a0f-9644-82fe83a72e2d\") " pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.964757 4676 generic.go:334] "Generic (PLEG): container finished" podID="6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4" containerID="8189f6b4420a2e8c611c03a3298dcf14f84845a898137968cf5f45e6c0c37685" exitCode=0 Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.964861 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" event={"ID":"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4","Type":"ContainerDied","Data":"8189f6b4420a2e8c611c03a3298dcf14f84845a898137968cf5f45e6c0c37685"} Jan 24 00:26:15 crc kubenswrapper[4676]: I0124 00:26:15.984084 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.081355 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.109958 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5hbp\" (UniqueName: \"kubernetes.io/projected/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-kube-api-access-s5hbp\") pod \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.110016 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-ovsdbserver-nb\") pod \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.110122 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-ovsdbserver-sb\") pod \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.110157 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-dns-swift-storage-0\") pod \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.110204 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-config\") pod \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.110257 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-dns-svc\") pod \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\" (UID: \"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4\") " Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.175637 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-kube-api-access-s5hbp" (OuterVolumeSpecName: "kube-api-access-s5hbp") pod "6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4" (UID: "6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4"). InnerVolumeSpecName "kube-api-access-s5hbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.214886 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5hbp\" (UniqueName: \"kubernetes.io/projected/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-kube-api-access-s5hbp\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.345225 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4" (UID: "6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.346098 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.346274 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4" (UID: "6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.403698 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4" (UID: "6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.422147 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4" (UID: "6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.453614 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.453646 4676 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.453657 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.482752 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-config" (OuterVolumeSpecName: "config") pod "6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4" (UID: "6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.506676 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b865b64bc-drclt"] Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.555308 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.977741 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" event={"ID":"6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4","Type":"ContainerDied","Data":"85858d9bee0a25807ae1aa70febaf41556800347d26ef61522b473ffb2662a4f"} Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.977767 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-tkk74" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.978278 4676 scope.go:117] "RemoveContainer" containerID="8189f6b4420a2e8c611c03a3298dcf14f84845a898137968cf5f45e6c0c37685" Jan 24 00:26:16 crc kubenswrapper[4676]: I0124 00:26:16.987128 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b865b64bc-drclt" event={"ID":"cdf394f7-5d67-4a0f-9644-82fe83a72e2d","Type":"ContainerStarted","Data":"570112f3c08284a28ce9d4293d6ec2bcf1e433dc8c0865373cc49543d157de29"} Jan 24 00:26:17 crc kubenswrapper[4676]: I0124 00:26:17.019202 4676 scope.go:117] "RemoveContainer" containerID="e9acf51095b7dd6a0ad30ae069a3cb2ce0ad324415a9a8bc556468f57ab1c34d" Jan 24 00:26:17 crc kubenswrapper[4676]: I0124 00:26:17.033859 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-tkk74"] Jan 24 00:26:17 crc kubenswrapper[4676]: I0124 00:26:17.048633 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-tkk74"] Jan 24 00:26:17 crc 
kubenswrapper[4676]: I0124 00:26:17.997093 4676 generic.go:334] "Generic (PLEG): container finished" podID="cdf394f7-5d67-4a0f-9644-82fe83a72e2d" containerID="195ec07999521e8d01c9bfdf5c88e4c63e302e89c9179fb6fbb32b00261a1c6d" exitCode=0 Jan 24 00:26:17 crc kubenswrapper[4676]: I0124 00:26:17.997188 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b865b64bc-drclt" event={"ID":"cdf394f7-5d67-4a0f-9644-82fe83a72e2d","Type":"ContainerDied","Data":"195ec07999521e8d01c9bfdf5c88e4c63e302e89c9179fb6fbb32b00261a1c6d"} Jan 24 00:26:18 crc kubenswrapper[4676]: I0124 00:26:18.267443 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4" path="/var/lib/kubelet/pods/6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4/volumes" Jan 24 00:26:19 crc kubenswrapper[4676]: I0124 00:26:19.017008 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b865b64bc-drclt" event={"ID":"cdf394f7-5d67-4a0f-9644-82fe83a72e2d","Type":"ContainerStarted","Data":"09fcb71dd66ab87ad3c04aae1ffd1429983a5c4a5e0a4cf9884a9644c317d6ee"} Jan 24 00:26:19 crc kubenswrapper[4676]: I0124 00:26:19.018448 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:19 crc kubenswrapper[4676]: I0124 00:26:19.048387 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b865b64bc-drclt" podStartSLOduration=4.048359236 podStartE2EDuration="4.048359236s" podCreationTimestamp="2026-01-24 00:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:26:19.039383533 +0000 UTC m=+1363.069354544" watchObservedRunningTime="2026-01-24 00:26:19.048359236 +0000 UTC m=+1363.078330257" Jan 24 00:26:25 crc kubenswrapper[4676]: I0124 00:26:25.985709 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-6b865b64bc-drclt" Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.073776 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-28gwc"] Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.079478 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-28gwc" podUID="cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" containerName="dnsmasq-dns" containerID="cri-o://2c37d9e66e5004b7c1d2e801bd1fd56b36ccdd85608287fcda0faf0b75dfccc3" gracePeriod=10 Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.531630 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.665528 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-openstack-edpm-ipam\") pod \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.665581 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-config\") pod \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.665625 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-ovsdbserver-nb\") pod \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.665661 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-dns-svc\") pod \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.665692 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-ovsdbserver-sb\") pod \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.665726 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbbrv\" (UniqueName: \"kubernetes.io/projected/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-kube-api-access-fbbrv\") pod \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.665742 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-dns-swift-storage-0\") pod \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\" (UID: \"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa\") " Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.679580 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-kube-api-access-fbbrv" (OuterVolumeSpecName: "kube-api-access-fbbrv") pod "cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" (UID: "cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa"). InnerVolumeSpecName "kube-api-access-fbbrv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.724372 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" (UID: "cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.730934 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-config" (OuterVolumeSpecName: "config") pod "cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" (UID: "cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.731713 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" (UID: "cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.737775 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" (UID: "cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.743626 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" (UID: "cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.748075 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" (UID: "cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.767662 4676 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.767693 4676 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.767702 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.767712 4676 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-dns-svc\") on node \"crc\" 
DevicePath \"\"" Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.767720 4676 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.767729 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbbrv\" (UniqueName: \"kubernetes.io/projected/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-kube-api-access-fbbrv\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:26 crc kubenswrapper[4676]: I0124 00:26:26.767738 4676 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:26:27 crc kubenswrapper[4676]: I0124 00:26:27.099959 4676 generic.go:334] "Generic (PLEG): container finished" podID="cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" containerID="2c37d9e66e5004b7c1d2e801bd1fd56b36ccdd85608287fcda0faf0b75dfccc3" exitCode=0 Jan 24 00:26:27 crc kubenswrapper[4676]: I0124 00:26:27.100018 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-28gwc" event={"ID":"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa","Type":"ContainerDied","Data":"2c37d9e66e5004b7c1d2e801bd1fd56b36ccdd85608287fcda0faf0b75dfccc3"} Jan 24 00:26:27 crc kubenswrapper[4676]: I0124 00:26:27.100046 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-28gwc" Jan 24 00:26:27 crc kubenswrapper[4676]: I0124 00:26:27.100087 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-28gwc" event={"ID":"cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa","Type":"ContainerDied","Data":"47b70e12919d8c5f44b9279ac72d11f87aeeca3d747a40cf17fe0462e549f089"} Jan 24 00:26:27 crc kubenswrapper[4676]: I0124 00:26:27.100118 4676 scope.go:117] "RemoveContainer" containerID="2c37d9e66e5004b7c1d2e801bd1fd56b36ccdd85608287fcda0faf0b75dfccc3" Jan 24 00:26:27 crc kubenswrapper[4676]: I0124 00:26:27.136339 4676 scope.go:117] "RemoveContainer" containerID="53310433eaa7d3cfca542e7eeab8e064ca40864427d91517b1468dac71afcd29" Jan 24 00:26:27 crc kubenswrapper[4676]: I0124 00:26:27.156654 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-28gwc"] Jan 24 00:26:27 crc kubenswrapper[4676]: I0124 00:26:27.162681 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-28gwc"] Jan 24 00:26:27 crc kubenswrapper[4676]: I0124 00:26:27.174789 4676 scope.go:117] "RemoveContainer" containerID="2c37d9e66e5004b7c1d2e801bd1fd56b36ccdd85608287fcda0faf0b75dfccc3" Jan 24 00:26:27 crc kubenswrapper[4676]: E0124 00:26:27.175274 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c37d9e66e5004b7c1d2e801bd1fd56b36ccdd85608287fcda0faf0b75dfccc3\": container with ID starting with 2c37d9e66e5004b7c1d2e801bd1fd56b36ccdd85608287fcda0faf0b75dfccc3 not found: ID does not exist" containerID="2c37d9e66e5004b7c1d2e801bd1fd56b36ccdd85608287fcda0faf0b75dfccc3" Jan 24 00:26:27 crc kubenswrapper[4676]: I0124 00:26:27.175309 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c37d9e66e5004b7c1d2e801bd1fd56b36ccdd85608287fcda0faf0b75dfccc3"} err="failed to get container status 
\"2c37d9e66e5004b7c1d2e801bd1fd56b36ccdd85608287fcda0faf0b75dfccc3\": rpc error: code = NotFound desc = could not find container \"2c37d9e66e5004b7c1d2e801bd1fd56b36ccdd85608287fcda0faf0b75dfccc3\": container with ID starting with 2c37d9e66e5004b7c1d2e801bd1fd56b36ccdd85608287fcda0faf0b75dfccc3 not found: ID does not exist" Jan 24 00:26:27 crc kubenswrapper[4676]: I0124 00:26:27.175330 4676 scope.go:117] "RemoveContainer" containerID="53310433eaa7d3cfca542e7eeab8e064ca40864427d91517b1468dac71afcd29" Jan 24 00:26:27 crc kubenswrapper[4676]: E0124 00:26:27.175631 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53310433eaa7d3cfca542e7eeab8e064ca40864427d91517b1468dac71afcd29\": container with ID starting with 53310433eaa7d3cfca542e7eeab8e064ca40864427d91517b1468dac71afcd29 not found: ID does not exist" containerID="53310433eaa7d3cfca542e7eeab8e064ca40864427d91517b1468dac71afcd29" Jan 24 00:26:27 crc kubenswrapper[4676]: I0124 00:26:27.175651 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53310433eaa7d3cfca542e7eeab8e064ca40864427d91517b1468dac71afcd29"} err="failed to get container status \"53310433eaa7d3cfca542e7eeab8e064ca40864427d91517b1468dac71afcd29\": rpc error: code = NotFound desc = could not find container \"53310433eaa7d3cfca542e7eeab8e064ca40864427d91517b1468dac71afcd29\": container with ID starting with 53310433eaa7d3cfca542e7eeab8e064ca40864427d91517b1468dac71afcd29 not found: ID does not exist" Jan 24 00:26:28 crc kubenswrapper[4676]: I0124 00:26:28.271183 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" path="/var/lib/kubelet/pods/cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa/volumes" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.127991 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mfnp9"] Jan 24 00:26:35 crc 
kubenswrapper[4676]: E0124 00:26:35.130020 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4" containerName="init" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.130136 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4" containerName="init" Jan 24 00:26:35 crc kubenswrapper[4676]: E0124 00:26:35.130355 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" containerName="dnsmasq-dns" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.130480 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" containerName="dnsmasq-dns" Jan 24 00:26:35 crc kubenswrapper[4676]: E0124 00:26:35.130572 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4" containerName="dnsmasq-dns" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.130664 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4" containerName="dnsmasq-dns" Jan 24 00:26:35 crc kubenswrapper[4676]: E0124 00:26:35.130754 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" containerName="init" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.130834 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" containerName="init" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.131194 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a3231ad-bc2c-4a51-813d-cfd1a11c4fc4" containerName="dnsmasq-dns" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.131372 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd7a87a4-c4de-4fa5-ad41-c9f43405b2aa" containerName="dnsmasq-dns" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.133150 4676 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.136863 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mfnp9"] Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.146876 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde71182-237b-4e5e-9acd-7c063187e9d9-utilities\") pod \"redhat-operators-mfnp9\" (UID: \"fde71182-237b-4e5e-9acd-7c063187e9d9\") " pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.147092 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwdg8\" (UniqueName: \"kubernetes.io/projected/fde71182-237b-4e5e-9acd-7c063187e9d9-kube-api-access-cwdg8\") pod \"redhat-operators-mfnp9\" (UID: \"fde71182-237b-4e5e-9acd-7c063187e9d9\") " pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.147303 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde71182-237b-4e5e-9acd-7c063187e9d9-catalog-content\") pod \"redhat-operators-mfnp9\" (UID: \"fde71182-237b-4e5e-9acd-7c063187e9d9\") " pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.249775 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde71182-237b-4e5e-9acd-7c063187e9d9-catalog-content\") pod \"redhat-operators-mfnp9\" (UID: \"fde71182-237b-4e5e-9acd-7c063187e9d9\") " pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.249899 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde71182-237b-4e5e-9acd-7c063187e9d9-utilities\") pod \"redhat-operators-mfnp9\" (UID: \"fde71182-237b-4e5e-9acd-7c063187e9d9\") " pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.249932 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwdg8\" (UniqueName: \"kubernetes.io/projected/fde71182-237b-4e5e-9acd-7c063187e9d9-kube-api-access-cwdg8\") pod \"redhat-operators-mfnp9\" (UID: \"fde71182-237b-4e5e-9acd-7c063187e9d9\") " pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.250583 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde71182-237b-4e5e-9acd-7c063187e9d9-catalog-content\") pod \"redhat-operators-mfnp9\" (UID: \"fde71182-237b-4e5e-9acd-7c063187e9d9\") " pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.250607 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde71182-237b-4e5e-9acd-7c063187e9d9-utilities\") pod \"redhat-operators-mfnp9\" (UID: \"fde71182-237b-4e5e-9acd-7c063187e9d9\") " pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.273154 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwdg8\" (UniqueName: \"kubernetes.io/projected/fde71182-237b-4e5e-9acd-7c063187e9d9-kube-api-access-cwdg8\") pod \"redhat-operators-mfnp9\" (UID: \"fde71182-237b-4e5e-9acd-7c063187e9d9\") " pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.458046 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:26:35 crc kubenswrapper[4676]: I0124 00:26:35.905752 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mfnp9"] Jan 24 00:26:36 crc kubenswrapper[4676]: I0124 00:26:36.205018 4676 generic.go:334] "Generic (PLEG): container finished" podID="fde71182-237b-4e5e-9acd-7c063187e9d9" containerID="b4efdc97c998549b295a5310ebae1d71becdddc1045908ee456a712d7b3212b5" exitCode=0 Jan 24 00:26:36 crc kubenswrapper[4676]: I0124 00:26:36.205185 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfnp9" event={"ID":"fde71182-237b-4e5e-9acd-7c063187e9d9","Type":"ContainerDied","Data":"b4efdc97c998549b295a5310ebae1d71becdddc1045908ee456a712d7b3212b5"} Jan 24 00:26:36 crc kubenswrapper[4676]: I0124 00:26:36.206456 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfnp9" event={"ID":"fde71182-237b-4e5e-9acd-7c063187e9d9","Type":"ContainerStarted","Data":"3a549d24e368d493375a69f5349ff827828a9aa43aab31f6ce9fe3d410a59e73"} Jan 24 00:26:38 crc kubenswrapper[4676]: I0124 00:26:38.225316 4676 generic.go:334] "Generic (PLEG): container finished" podID="c162e478-58e3-4a83-97cb-29887613c1aa" containerID="e8dd570456f163b9ffc9b717def36eea3b68755520281c3ad977128488285d1c" exitCode=0 Jan 24 00:26:38 crc kubenswrapper[4676]: I0124 00:26:38.225454 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c162e478-58e3-4a83-97cb-29887613c1aa","Type":"ContainerDied","Data":"e8dd570456f163b9ffc9b717def36eea3b68755520281c3ad977128488285d1c"} Jan 24 00:26:38 crc kubenswrapper[4676]: I0124 00:26:38.229211 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfnp9" 
event={"ID":"fde71182-237b-4e5e-9acd-7c063187e9d9","Type":"ContainerStarted","Data":"840682d1bbe8575a5b2335cc56507fb17db2824f08a0d8e5489d08c656cbb9ab"} Jan 24 00:26:38 crc kubenswrapper[4676]: I0124 00:26:38.233647 4676 generic.go:334] "Generic (PLEG): container finished" podID="a2df1d42-fa93-4771-ba77-1c27f820b298" containerID="4622ad5ec55e893e88aecaf78be207183e4c4ff1e3645b53366aa4945663cc45" exitCode=0 Jan 24 00:26:38 crc kubenswrapper[4676]: I0124 00:26:38.233703 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a2df1d42-fa93-4771-ba77-1c27f820b298","Type":"ContainerDied","Data":"4622ad5ec55e893e88aecaf78be207183e4c4ff1e3645b53366aa4945663cc45"} Jan 24 00:26:39 crc kubenswrapper[4676]: I0124 00:26:39.252791 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a2df1d42-fa93-4771-ba77-1c27f820b298","Type":"ContainerStarted","Data":"14f5b6f59284fc6ae60eca31583a70f5c98586f73492946534537fd1e26f3a40"} Jan 24 00:26:39 crc kubenswrapper[4676]: I0124 00:26:39.253257 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:39 crc kubenswrapper[4676]: I0124 00:26:39.254901 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c162e478-58e3-4a83-97cb-29887613c1aa","Type":"ContainerStarted","Data":"16b670cb5e6da50b2aaba95552fa1fbf22bc6d2fdaafcc881aec602e2df9dbda"} Jan 24 00:26:39 crc kubenswrapper[4676]: I0124 00:26:39.255263 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 24 00:26:39 crc kubenswrapper[4676]: I0124 00:26:39.288812 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.288791512 podStartE2EDuration="36.288791512s" podCreationTimestamp="2026-01-24 00:26:03 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:26:39.273242508 +0000 UTC m=+1383.303213509" watchObservedRunningTime="2026-01-24 00:26:39.288791512 +0000 UTC m=+1383.318762523" Jan 24 00:26:39 crc kubenswrapper[4676]: I0124 00:26:39.305836 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.305818207 podStartE2EDuration="37.305818207s" podCreationTimestamp="2026-01-24 00:26:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:26:39.29753582 +0000 UTC m=+1383.327506821" watchObservedRunningTime="2026-01-24 00:26:39.305818207 +0000 UTC m=+1383.335789208" Jan 24 00:26:39 crc kubenswrapper[4676]: I0124 00:26:39.363760 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:26:39 crc kubenswrapper[4676]: I0124 00:26:39.363819 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:26:39 crc kubenswrapper[4676]: I0124 00:26:39.363871 4676 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:26:39 crc kubenswrapper[4676]: I0124 00:26:39.364744 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"6c0fdc4fa29c1a85e06a8e0b7d6899a3299af1dbe5c0f08e67bee66057d6c55e"} pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 00:26:39 crc kubenswrapper[4676]: I0124 00:26:39.364808 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" containerID="cri-o://6c0fdc4fa29c1a85e06a8e0b7d6899a3299af1dbe5c0f08e67bee66057d6c55e" gracePeriod=600 Jan 24 00:26:41 crc kubenswrapper[4676]: I0124 00:26:41.273804 4676 generic.go:334] "Generic (PLEG): container finished" podID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerID="6c0fdc4fa29c1a85e06a8e0b7d6899a3299af1dbe5c0f08e67bee66057d6c55e" exitCode=0 Jan 24 00:26:41 crc kubenswrapper[4676]: I0124 00:26:41.273894 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerDied","Data":"6c0fdc4fa29c1a85e06a8e0b7d6899a3299af1dbe5c0f08e67bee66057d6c55e"} Jan 24 00:26:41 crc kubenswrapper[4676]: I0124 00:26:41.274144 4676 scope.go:117] "RemoveContainer" containerID="f13744ab61f6ff84c30249dfd3e19836649d7bb6c4e4a3db144939c565fd684d" Jan 24 00:26:42 crc kubenswrapper[4676]: I0124 00:26:42.287840 4676 generic.go:334] "Generic (PLEG): container finished" podID="fde71182-237b-4e5e-9acd-7c063187e9d9" containerID="840682d1bbe8575a5b2335cc56507fb17db2824f08a0d8e5489d08c656cbb9ab" exitCode=0 Jan 24 00:26:42 crc kubenswrapper[4676]: I0124 00:26:42.287894 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfnp9" 
event={"ID":"fde71182-237b-4e5e-9acd-7c063187e9d9","Type":"ContainerDied","Data":"840682d1bbe8575a5b2335cc56507fb17db2824f08a0d8e5489d08c656cbb9ab"} Jan 24 00:26:43 crc kubenswrapper[4676]: I0124 00:26:43.316160 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfnp9" event={"ID":"fde71182-237b-4e5e-9acd-7c063187e9d9","Type":"ContainerStarted","Data":"e055fbf4304941e63aaafead3ca8b744c531ee533151d9a0bf016a4015b5bf84"} Jan 24 00:26:43 crc kubenswrapper[4676]: I0124 00:26:43.325607 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751"} Jan 24 00:26:43 crc kubenswrapper[4676]: I0124 00:26:43.340676 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mfnp9" podStartSLOduration=1.671082754 podStartE2EDuration="8.34066231s" podCreationTimestamp="2026-01-24 00:26:35 +0000 UTC" firstStartedPulling="2026-01-24 00:26:36.206365253 +0000 UTC m=+1380.236336254" lastFinishedPulling="2026-01-24 00:26:42.875944799 +0000 UTC m=+1386.905915810" observedRunningTime="2026-01-24 00:26:43.336047359 +0000 UTC m=+1387.366018360" watchObservedRunningTime="2026-01-24 00:26:43.34066231 +0000 UTC m=+1387.370633311" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.474535 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd"] Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.476478 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.479992 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.480941 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.484668 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.485694 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd"] Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.485943 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.534587 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.534676 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.534705 4676 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcrdb\" (UniqueName: \"kubernetes.io/projected/15cfbdc6-1f3b-49a5-8822-c4af1e686731-kube-api-access-vcrdb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.534730 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.637638 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.637725 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.637756 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcrdb\" (UniqueName: 
\"kubernetes.io/projected/15cfbdc6-1f3b-49a5-8822-c4af1e686731-kube-api-access-vcrdb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.637790 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.644404 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.644510 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.658067 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.658272 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcrdb\" (UniqueName: \"kubernetes.io/projected/15cfbdc6-1f3b-49a5-8822-c4af1e686731-kube-api-access-vcrdb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:26:44 crc kubenswrapper[4676]: I0124 00:26:44.792407 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:26:45 crc kubenswrapper[4676]: I0124 00:26:45.437554 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd"] Jan 24 00:26:45 crc kubenswrapper[4676]: I0124 00:26:45.459564 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:26:45 crc kubenswrapper[4676]: I0124 00:26:45.459618 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:26:45 crc kubenswrapper[4676]: W0124 00:26:45.501950 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15cfbdc6_1f3b_49a5_8822_c4af1e686731.slice/crio-bad7f1fcff1a502c1a012676a123e6a6f137a9281f0ba2ea26bfe6c7df095d06 WatchSource:0}: Error finding container bad7f1fcff1a502c1a012676a123e6a6f137a9281f0ba2ea26bfe6c7df095d06: Status 404 returned error can't find the container with id bad7f1fcff1a502c1a012676a123e6a6f137a9281f0ba2ea26bfe6c7df095d06 Jan 24 00:26:46 crc kubenswrapper[4676]: I0124 00:26:46.360230 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" event={"ID":"15cfbdc6-1f3b-49a5-8822-c4af1e686731","Type":"ContainerStarted","Data":"bad7f1fcff1a502c1a012676a123e6a6f137a9281f0ba2ea26bfe6c7df095d06"} Jan 24 00:26:46 crc kubenswrapper[4676]: I0124 00:26:46.547116 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mfnp9" podUID="fde71182-237b-4e5e-9acd-7c063187e9d9" containerName="registry-server" probeResult="failure" output=< Jan 24 00:26:46 crc kubenswrapper[4676]: timeout: failed to connect service ":50051" within 1s Jan 24 00:26:46 crc kubenswrapper[4676]: > Jan 24 00:26:53 crc kubenswrapper[4676]: I0124 00:26:53.244773 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 24 00:26:54 crc kubenswrapper[4676]: I0124 00:26:54.341923 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 24 00:26:56 crc kubenswrapper[4676]: I0124 00:26:56.537811 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mfnp9" podUID="fde71182-237b-4e5e-9acd-7c063187e9d9" containerName="registry-server" probeResult="failure" output=< Jan 24 00:26:56 crc kubenswrapper[4676]: timeout: failed to connect service ":50051" within 1s Jan 24 00:26:56 crc kubenswrapper[4676]: > Jan 24 00:26:57 crc kubenswrapper[4676]: I0124 00:26:57.494975 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" event={"ID":"15cfbdc6-1f3b-49a5-8822-c4af1e686731","Type":"ContainerStarted","Data":"d3abe699a1e3161e3d65d105949518b56348bef4880f815312f858cc86bdc39a"} Jan 24 00:26:57 crc kubenswrapper[4676]: I0124 00:26:57.523992 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" podStartSLOduration=2.053822696 
podStartE2EDuration="13.523973088s" podCreationTimestamp="2026-01-24 00:26:44 +0000 UTC" firstStartedPulling="2026-01-24 00:26:45.508075258 +0000 UTC m=+1389.538046259" lastFinishedPulling="2026-01-24 00:26:56.97822565 +0000 UTC m=+1401.008196651" observedRunningTime="2026-01-24 00:26:57.52191933 +0000 UTC m=+1401.551890351" watchObservedRunningTime="2026-01-24 00:26:57.523973088 +0000 UTC m=+1401.553944099" Jan 24 00:27:05 crc kubenswrapper[4676]: I0124 00:27:05.509993 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:27:05 crc kubenswrapper[4676]: I0124 00:27:05.572855 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:27:06 crc kubenswrapper[4676]: I0124 00:27:06.338170 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mfnp9"] Jan 24 00:27:06 crc kubenswrapper[4676]: I0124 00:27:06.577178 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mfnp9" podUID="fde71182-237b-4e5e-9acd-7c063187e9d9" containerName="registry-server" containerID="cri-o://e055fbf4304941e63aaafead3ca8b744c531ee533151d9a0bf016a4015b5bf84" gracePeriod=2 Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.085559 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.093035 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwdg8\" (UniqueName: \"kubernetes.io/projected/fde71182-237b-4e5e-9acd-7c063187e9d9-kube-api-access-cwdg8\") pod \"fde71182-237b-4e5e-9acd-7c063187e9d9\" (UID: \"fde71182-237b-4e5e-9acd-7c063187e9d9\") " Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.093124 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde71182-237b-4e5e-9acd-7c063187e9d9-utilities\") pod \"fde71182-237b-4e5e-9acd-7c063187e9d9\" (UID: \"fde71182-237b-4e5e-9acd-7c063187e9d9\") " Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.093154 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde71182-237b-4e5e-9acd-7c063187e9d9-catalog-content\") pod \"fde71182-237b-4e5e-9acd-7c063187e9d9\" (UID: \"fde71182-237b-4e5e-9acd-7c063187e9d9\") " Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.095048 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fde71182-237b-4e5e-9acd-7c063187e9d9-utilities" (OuterVolumeSpecName: "utilities") pod "fde71182-237b-4e5e-9acd-7c063187e9d9" (UID: "fde71182-237b-4e5e-9acd-7c063187e9d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.105946 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde71182-237b-4e5e-9acd-7c063187e9d9-kube-api-access-cwdg8" (OuterVolumeSpecName: "kube-api-access-cwdg8") pod "fde71182-237b-4e5e-9acd-7c063187e9d9" (UID: "fde71182-237b-4e5e-9acd-7c063187e9d9"). InnerVolumeSpecName "kube-api-access-cwdg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.199314 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwdg8\" (UniqueName: \"kubernetes.io/projected/fde71182-237b-4e5e-9acd-7c063187e9d9-kube-api-access-cwdg8\") on node \"crc\" DevicePath \"\"" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.199362 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde71182-237b-4e5e-9acd-7c063187e9d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.252767 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fde71182-237b-4e5e-9acd-7c063187e9d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fde71182-237b-4e5e-9acd-7c063187e9d9" (UID: "fde71182-237b-4e5e-9acd-7c063187e9d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.301719 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde71182-237b-4e5e-9acd-7c063187e9d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.589364 4676 generic.go:334] "Generic (PLEG): container finished" podID="fde71182-237b-4e5e-9acd-7c063187e9d9" containerID="e055fbf4304941e63aaafead3ca8b744c531ee533151d9a0bf016a4015b5bf84" exitCode=0 Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.589404 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfnp9" event={"ID":"fde71182-237b-4e5e-9acd-7c063187e9d9","Type":"ContainerDied","Data":"e055fbf4304941e63aaafead3ca8b744c531ee533151d9a0bf016a4015b5bf84"} Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.589444 4676 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-mfnp9" event={"ID":"fde71182-237b-4e5e-9acd-7c063187e9d9","Type":"ContainerDied","Data":"3a549d24e368d493375a69f5349ff827828a9aa43aab31f6ce9fe3d410a59e73"} Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.589469 4676 scope.go:117] "RemoveContainer" containerID="e055fbf4304941e63aaafead3ca8b744c531ee533151d9a0bf016a4015b5bf84" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.589465 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mfnp9" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.627663 4676 scope.go:117] "RemoveContainer" containerID="840682d1bbe8575a5b2335cc56507fb17db2824f08a0d8e5489d08c656cbb9ab" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.630599 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mfnp9"] Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.639322 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mfnp9"] Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.650893 4676 scope.go:117] "RemoveContainer" containerID="b4efdc97c998549b295a5310ebae1d71becdddc1045908ee456a712d7b3212b5" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.700415 4676 scope.go:117] "RemoveContainer" containerID="e055fbf4304941e63aaafead3ca8b744c531ee533151d9a0bf016a4015b5bf84" Jan 24 00:27:07 crc kubenswrapper[4676]: E0124 00:27:07.701000 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e055fbf4304941e63aaafead3ca8b744c531ee533151d9a0bf016a4015b5bf84\": container with ID starting with e055fbf4304941e63aaafead3ca8b744c531ee533151d9a0bf016a4015b5bf84 not found: ID does not exist" containerID="e055fbf4304941e63aaafead3ca8b744c531ee533151d9a0bf016a4015b5bf84" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.701133 4676 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e055fbf4304941e63aaafead3ca8b744c531ee533151d9a0bf016a4015b5bf84"} err="failed to get container status \"e055fbf4304941e63aaafead3ca8b744c531ee533151d9a0bf016a4015b5bf84\": rpc error: code = NotFound desc = could not find container \"e055fbf4304941e63aaafead3ca8b744c531ee533151d9a0bf016a4015b5bf84\": container with ID starting with e055fbf4304941e63aaafead3ca8b744c531ee533151d9a0bf016a4015b5bf84 not found: ID does not exist" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.701220 4676 scope.go:117] "RemoveContainer" containerID="840682d1bbe8575a5b2335cc56507fb17db2824f08a0d8e5489d08c656cbb9ab" Jan 24 00:27:07 crc kubenswrapper[4676]: E0124 00:27:07.701593 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"840682d1bbe8575a5b2335cc56507fb17db2824f08a0d8e5489d08c656cbb9ab\": container with ID starting with 840682d1bbe8575a5b2335cc56507fb17db2824f08a0d8e5489d08c656cbb9ab not found: ID does not exist" containerID="840682d1bbe8575a5b2335cc56507fb17db2824f08a0d8e5489d08c656cbb9ab" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.701624 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"840682d1bbe8575a5b2335cc56507fb17db2824f08a0d8e5489d08c656cbb9ab"} err="failed to get container status \"840682d1bbe8575a5b2335cc56507fb17db2824f08a0d8e5489d08c656cbb9ab\": rpc error: code = NotFound desc = could not find container \"840682d1bbe8575a5b2335cc56507fb17db2824f08a0d8e5489d08c656cbb9ab\": container with ID starting with 840682d1bbe8575a5b2335cc56507fb17db2824f08a0d8e5489d08c656cbb9ab not found: ID does not exist" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.701645 4676 scope.go:117] "RemoveContainer" containerID="b4efdc97c998549b295a5310ebae1d71becdddc1045908ee456a712d7b3212b5" Jan 24 00:27:07 crc kubenswrapper[4676]: E0124 
00:27:07.702669 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4efdc97c998549b295a5310ebae1d71becdddc1045908ee456a712d7b3212b5\": container with ID starting with b4efdc97c998549b295a5310ebae1d71becdddc1045908ee456a712d7b3212b5 not found: ID does not exist" containerID="b4efdc97c998549b295a5310ebae1d71becdddc1045908ee456a712d7b3212b5" Jan 24 00:27:07 crc kubenswrapper[4676]: I0124 00:27:07.702754 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4efdc97c998549b295a5310ebae1d71becdddc1045908ee456a712d7b3212b5"} err="failed to get container status \"b4efdc97c998549b295a5310ebae1d71becdddc1045908ee456a712d7b3212b5\": rpc error: code = NotFound desc = could not find container \"b4efdc97c998549b295a5310ebae1d71becdddc1045908ee456a712d7b3212b5\": container with ID starting with b4efdc97c998549b295a5310ebae1d71becdddc1045908ee456a712d7b3212b5 not found: ID does not exist" Jan 24 00:27:08 crc kubenswrapper[4676]: I0124 00:27:08.275069 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde71182-237b-4e5e-9acd-7c063187e9d9" path="/var/lib/kubelet/pods/fde71182-237b-4e5e-9acd-7c063187e9d9/volumes" Jan 24 00:27:09 crc kubenswrapper[4676]: I0124 00:27:09.614153 4676 generic.go:334] "Generic (PLEG): container finished" podID="15cfbdc6-1f3b-49a5-8822-c4af1e686731" containerID="d3abe699a1e3161e3d65d105949518b56348bef4880f815312f858cc86bdc39a" exitCode=0 Jan 24 00:27:09 crc kubenswrapper[4676]: I0124 00:27:09.614254 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" event={"ID":"15cfbdc6-1f3b-49a5-8822-c4af1e686731","Type":"ContainerDied","Data":"d3abe699a1e3161e3d65d105949518b56348bef4880f815312f858cc86bdc39a"} Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.075598 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.278017 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-ssh-key-openstack-edpm-ipam\") pod \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.278128 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcrdb\" (UniqueName: \"kubernetes.io/projected/15cfbdc6-1f3b-49a5-8822-c4af1e686731-kube-api-access-vcrdb\") pod \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.278196 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-repo-setup-combined-ca-bundle\") pod \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.279264 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-inventory\") pod \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\" (UID: \"15cfbdc6-1f3b-49a5-8822-c4af1e686731\") " Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.286316 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15cfbdc6-1f3b-49a5-8822-c4af1e686731-kube-api-access-vcrdb" (OuterVolumeSpecName: "kube-api-access-vcrdb") pod "15cfbdc6-1f3b-49a5-8822-c4af1e686731" (UID: "15cfbdc6-1f3b-49a5-8822-c4af1e686731"). InnerVolumeSpecName "kube-api-access-vcrdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.287501 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "15cfbdc6-1f3b-49a5-8822-c4af1e686731" (UID: "15cfbdc6-1f3b-49a5-8822-c4af1e686731"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.313594 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-inventory" (OuterVolumeSpecName: "inventory") pod "15cfbdc6-1f3b-49a5-8822-c4af1e686731" (UID: "15cfbdc6-1f3b-49a5-8822-c4af1e686731"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.315114 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "15cfbdc6-1f3b-49a5-8822-c4af1e686731" (UID: "15cfbdc6-1f3b-49a5-8822-c4af1e686731"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.382481 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.382519 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcrdb\" (UniqueName: \"kubernetes.io/projected/15cfbdc6-1f3b-49a5-8822-c4af1e686731-kube-api-access-vcrdb\") on node \"crc\" DevicePath \"\"" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.382531 4676 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.382544 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15cfbdc6-1f3b-49a5-8822-c4af1e686731-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.637797 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" event={"ID":"15cfbdc6-1f3b-49a5-8822-c4af1e686731","Type":"ContainerDied","Data":"bad7f1fcff1a502c1a012676a123e6a6f137a9281f0ba2ea26bfe6c7df095d06"} Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.637843 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bad7f1fcff1a502c1a012676a123e6a6f137a9281f0ba2ea26bfe6c7df095d06" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.637925 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.781533 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8"] Jan 24 00:27:11 crc kubenswrapper[4676]: E0124 00:27:11.782673 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde71182-237b-4e5e-9acd-7c063187e9d9" containerName="extract-content" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.782722 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde71182-237b-4e5e-9acd-7c063187e9d9" containerName="extract-content" Jan 24 00:27:11 crc kubenswrapper[4676]: E0124 00:27:11.782769 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde71182-237b-4e5e-9acd-7c063187e9d9" containerName="extract-utilities" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.782784 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde71182-237b-4e5e-9acd-7c063187e9d9" containerName="extract-utilities" Jan 24 00:27:11 crc kubenswrapper[4676]: E0124 00:27:11.782815 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde71182-237b-4e5e-9acd-7c063187e9d9" containerName="registry-server" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.782828 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde71182-237b-4e5e-9acd-7c063187e9d9" containerName="registry-server" Jan 24 00:27:11 crc kubenswrapper[4676]: E0124 00:27:11.782876 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15cfbdc6-1f3b-49a5-8822-c4af1e686731" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.782894 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cfbdc6-1f3b-49a5-8822-c4af1e686731" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.783332 4676 
memory_manager.go:354] "RemoveStaleState removing state" podUID="15cfbdc6-1f3b-49a5-8822-c4af1e686731" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.783372 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde71182-237b-4e5e-9acd-7c063187e9d9" containerName="registry-server" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.784534 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.789630 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19b712ec-28a7-419f-9f09-7d0b0ecbf747-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qx6r8\" (UID: \"19b712ec-28a7-419f-9f09-7d0b0ecbf747\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.789737 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slkrn\" (UniqueName: \"kubernetes.io/projected/19b712ec-28a7-419f-9f09-7d0b0ecbf747-kube-api-access-slkrn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qx6r8\" (UID: \"19b712ec-28a7-419f-9f09-7d0b0ecbf747\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.789938 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19b712ec-28a7-419f-9f09-7d0b0ecbf747-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qx6r8\" (UID: \"19b712ec-28a7-419f-9f09-7d0b0ecbf747\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 
00:27:11.794069 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.794433 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.794500 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.795067 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.799612 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8"] Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.892141 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19b712ec-28a7-419f-9f09-7d0b0ecbf747-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qx6r8\" (UID: \"19b712ec-28a7-419f-9f09-7d0b0ecbf747\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.892281 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slkrn\" (UniqueName: \"kubernetes.io/projected/19b712ec-28a7-419f-9f09-7d0b0ecbf747-kube-api-access-slkrn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qx6r8\" (UID: \"19b712ec-28a7-419f-9f09-7d0b0ecbf747\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.892368 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19b712ec-28a7-419f-9f09-7d0b0ecbf747-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-qx6r8\" (UID: \"19b712ec-28a7-419f-9f09-7d0b0ecbf747\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.898052 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19b712ec-28a7-419f-9f09-7d0b0ecbf747-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qx6r8\" (UID: \"19b712ec-28a7-419f-9f09-7d0b0ecbf747\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.899192 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19b712ec-28a7-419f-9f09-7d0b0ecbf747-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qx6r8\" (UID: \"19b712ec-28a7-419f-9f09-7d0b0ecbf747\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" Jan 24 00:27:11 crc kubenswrapper[4676]: I0124 00:27:11.914707 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slkrn\" (UniqueName: \"kubernetes.io/projected/19b712ec-28a7-419f-9f09-7d0b0ecbf747-kube-api-access-slkrn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qx6r8\" (UID: \"19b712ec-28a7-419f-9f09-7d0b0ecbf747\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" Jan 24 00:27:12 crc kubenswrapper[4676]: I0124 00:27:12.112016 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" Jan 24 00:27:12 crc kubenswrapper[4676]: I0124 00:27:12.657735 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8"] Jan 24 00:27:13 crc kubenswrapper[4676]: I0124 00:27:13.672981 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" event={"ID":"19b712ec-28a7-419f-9f09-7d0b0ecbf747","Type":"ContainerStarted","Data":"40f8aff558f949e11983226e96660908090ee1f149c1d729ce8f538212ca4716"} Jan 24 00:27:13 crc kubenswrapper[4676]: I0124 00:27:13.673353 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" event={"ID":"19b712ec-28a7-419f-9f09-7d0b0ecbf747","Type":"ContainerStarted","Data":"288f568b7f17ebcd72b74c000c5d8e959c55a88bbca9c451f6aac55cdca9fc0c"} Jan 24 00:27:13 crc kubenswrapper[4676]: I0124 00:27:13.717332 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" podStartSLOduration=2.2914809800000002 podStartE2EDuration="2.717307073s" podCreationTimestamp="2026-01-24 00:27:11 +0000 UTC" firstStartedPulling="2026-01-24 00:27:12.67122773 +0000 UTC m=+1416.701198731" lastFinishedPulling="2026-01-24 00:27:13.097053813 +0000 UTC m=+1417.127024824" observedRunningTime="2026-01-24 00:27:13.70457896 +0000 UTC m=+1417.734550001" watchObservedRunningTime="2026-01-24 00:27:13.717307073 +0000 UTC m=+1417.747278114" Jan 24 00:27:16 crc kubenswrapper[4676]: I0124 00:27:16.717326 4676 generic.go:334] "Generic (PLEG): container finished" podID="19b712ec-28a7-419f-9f09-7d0b0ecbf747" containerID="40f8aff558f949e11983226e96660908090ee1f149c1d729ce8f538212ca4716" exitCode=0 Jan 24 00:27:16 crc kubenswrapper[4676]: I0124 00:27:16.717437 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" event={"ID":"19b712ec-28a7-419f-9f09-7d0b0ecbf747","Type":"ContainerDied","Data":"40f8aff558f949e11983226e96660908090ee1f149c1d729ce8f538212ca4716"} Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.136351 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.272477 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19b712ec-28a7-419f-9f09-7d0b0ecbf747-inventory\") pod \"19b712ec-28a7-419f-9f09-7d0b0ecbf747\" (UID: \"19b712ec-28a7-419f-9f09-7d0b0ecbf747\") " Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.272920 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slkrn\" (UniqueName: \"kubernetes.io/projected/19b712ec-28a7-419f-9f09-7d0b0ecbf747-kube-api-access-slkrn\") pod \"19b712ec-28a7-419f-9f09-7d0b0ecbf747\" (UID: \"19b712ec-28a7-419f-9f09-7d0b0ecbf747\") " Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.272969 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19b712ec-28a7-419f-9f09-7d0b0ecbf747-ssh-key-openstack-edpm-ipam\") pod \"19b712ec-28a7-419f-9f09-7d0b0ecbf747\" (UID: \"19b712ec-28a7-419f-9f09-7d0b0ecbf747\") " Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.292359 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19b712ec-28a7-419f-9f09-7d0b0ecbf747-kube-api-access-slkrn" (OuterVolumeSpecName: "kube-api-access-slkrn") pod "19b712ec-28a7-419f-9f09-7d0b0ecbf747" (UID: "19b712ec-28a7-419f-9f09-7d0b0ecbf747"). InnerVolumeSpecName "kube-api-access-slkrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.305867 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19b712ec-28a7-419f-9f09-7d0b0ecbf747-inventory" (OuterVolumeSpecName: "inventory") pod "19b712ec-28a7-419f-9f09-7d0b0ecbf747" (UID: "19b712ec-28a7-419f-9f09-7d0b0ecbf747"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.315494 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19b712ec-28a7-419f-9f09-7d0b0ecbf747-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "19b712ec-28a7-419f-9f09-7d0b0ecbf747" (UID: "19b712ec-28a7-419f-9f09-7d0b0ecbf747"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.375418 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19b712ec-28a7-419f-9f09-7d0b0ecbf747-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.375520 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slkrn\" (UniqueName: \"kubernetes.io/projected/19b712ec-28a7-419f-9f09-7d0b0ecbf747-kube-api-access-slkrn\") on node \"crc\" DevicePath \"\"" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.375573 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19b712ec-28a7-419f-9f09-7d0b0ecbf747-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.742339 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" 
event={"ID":"19b712ec-28a7-419f-9f09-7d0b0ecbf747","Type":"ContainerDied","Data":"288f568b7f17ebcd72b74c000c5d8e959c55a88bbca9c451f6aac55cdca9fc0c"} Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.742444 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="288f568b7f17ebcd72b74c000c5d8e959c55a88bbca9c451f6aac55cdca9fc0c" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.742503 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qx6r8" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.826697 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp"] Jan 24 00:27:18 crc kubenswrapper[4676]: E0124 00:27:18.827085 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19b712ec-28a7-419f-9f09-7d0b0ecbf747" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.827100 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="19b712ec-28a7-419f-9f09-7d0b0ecbf747" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.827278 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="19b712ec-28a7-419f-9f09-7d0b0ecbf747" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.827895 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.833263 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.833597 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.833710 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.833711 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.862227 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp"] Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.989940 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.990047 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cctcl\" (UniqueName: \"kubernetes.io/projected/1fd4e1f4-0772-493a-b929-6e93470f9abf-kube-api-access-cctcl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 
00:27:18.990081 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:27:18 crc kubenswrapper[4676]: I0124 00:27:18.990153 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:27:19 crc kubenswrapper[4676]: I0124 00:27:19.091681 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:27:19 crc kubenswrapper[4676]: I0124 00:27:19.091986 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:27:19 crc kubenswrapper[4676]: I0124 00:27:19.092084 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cctcl\" (UniqueName: 
\"kubernetes.io/projected/1fd4e1f4-0772-493a-b929-6e93470f9abf-kube-api-access-cctcl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:27:19 crc kubenswrapper[4676]: I0124 00:27:19.092169 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:27:19 crc kubenswrapper[4676]: I0124 00:27:19.098296 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:27:19 crc kubenswrapper[4676]: I0124 00:27:19.098447 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:27:19 crc kubenswrapper[4676]: I0124 00:27:19.101416 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:27:19 crc kubenswrapper[4676]: I0124 00:27:19.114062 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cctcl\" (UniqueName: \"kubernetes.io/projected/1fd4e1f4-0772-493a-b929-6e93470f9abf-kube-api-access-cctcl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:27:19 crc kubenswrapper[4676]: I0124 00:27:19.147500 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:27:19 crc kubenswrapper[4676]: I0124 00:27:19.779667 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp"] Jan 24 00:27:20 crc kubenswrapper[4676]: I0124 00:27:20.763301 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" event={"ID":"1fd4e1f4-0772-493a-b929-6e93470f9abf","Type":"ContainerStarted","Data":"990efd15c2bf2faf18318ad60df1588709f1e3dfd780fd07d20dad8fa6ff65f7"} Jan 24 00:27:20 crc kubenswrapper[4676]: I0124 00:27:20.763754 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" event={"ID":"1fd4e1f4-0772-493a-b929-6e93470f9abf","Type":"ContainerStarted","Data":"b1076e008af5cd31c6b2d2ff588ee49398925df4037135eafcda7ef3f9b4a900"} Jan 24 00:27:20 crc kubenswrapper[4676]: I0124 00:27:20.787510 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" podStartSLOduration=2.295037624 podStartE2EDuration="2.787482454s" podCreationTimestamp="2026-01-24 00:27:18 +0000 UTC" firstStartedPulling="2026-01-24 00:27:19.793972178 +0000 UTC m=+1423.823943199" 
lastFinishedPulling="2026-01-24 00:27:20.286417008 +0000 UTC m=+1424.316388029" observedRunningTime="2026-01-24 00:27:20.782877763 +0000 UTC m=+1424.812848774" watchObservedRunningTime="2026-01-24 00:27:20.787482454 +0000 UTC m=+1424.817453455" Jan 24 00:27:40 crc kubenswrapper[4676]: I0124 00:27:40.062832 4676 scope.go:117] "RemoveContainer" containerID="d2ed0ffdbd6bb9c785556f969ad9aeabbea68ca6c96b8f79735c762793724bfe" Jan 24 00:27:40 crc kubenswrapper[4676]: I0124 00:27:40.104503 4676 scope.go:117] "RemoveContainer" containerID="f59e9569d48da0357305a5589f2cf90a01ccd73d4a805dd25787bb7bb6ebfba6" Jan 24 00:28:40 crc kubenswrapper[4676]: I0124 00:28:40.317710 4676 scope.go:117] "RemoveContainer" containerID="51869c6b7eeadd27a59a7b34889ba0093a949ad78c0444eec3606959ac5ba339" Jan 24 00:29:09 crc kubenswrapper[4676]: I0124 00:29:09.364418 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:29:09 crc kubenswrapper[4676]: I0124 00:29:09.365118 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:29:39 crc kubenswrapper[4676]: I0124 00:29:39.364447 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:29:39 crc kubenswrapper[4676]: I0124 00:29:39.365036 4676 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:29:40 crc kubenswrapper[4676]: I0124 00:29:40.406103 4676 scope.go:117] "RemoveContainer" containerID="1da12014ab624fa95aa24ddc28c008acbe5db6c02d4d9e96a348452a0b9b9e38" Jan 24 00:29:40 crc kubenswrapper[4676]: I0124 00:29:40.443345 4676 scope.go:117] "RemoveContainer" containerID="3c1cdcc2ffd783555c7e0583ecb6ac91d2d0c15034969f7f5f6f8dba79f95867" Jan 24 00:30:00 crc kubenswrapper[4676]: I0124 00:30:00.188589 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj"] Jan 24 00:30:00 crc kubenswrapper[4676]: I0124 00:30:00.191857 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" Jan 24 00:30:00 crc kubenswrapper[4676]: I0124 00:30:00.199955 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 00:30:00 crc kubenswrapper[4676]: I0124 00:30:00.200274 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 00:30:00 crc kubenswrapper[4676]: I0124 00:30:00.218178 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc94ad7e-9d49-4100-a653-8957ee663195-config-volume\") pod \"collect-profiles-29486910-w5nqj\" (UID: \"fc94ad7e-9d49-4100-a653-8957ee663195\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" Jan 24 00:30:00 crc kubenswrapper[4676]: I0124 00:30:00.218831 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzjs8\" (UniqueName: \"kubernetes.io/projected/fc94ad7e-9d49-4100-a653-8957ee663195-kube-api-access-vzjs8\") pod \"collect-profiles-29486910-w5nqj\" (UID: \"fc94ad7e-9d49-4100-a653-8957ee663195\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" Jan 24 00:30:00 crc kubenswrapper[4676]: I0124 00:30:00.218873 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc94ad7e-9d49-4100-a653-8957ee663195-secret-volume\") pod \"collect-profiles-29486910-w5nqj\" (UID: \"fc94ad7e-9d49-4100-a653-8957ee663195\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" Jan 24 00:30:00 crc kubenswrapper[4676]: I0124 00:30:00.229997 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj"] Jan 24 00:30:00 crc kubenswrapper[4676]: I0124 00:30:00.320421 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc94ad7e-9d49-4100-a653-8957ee663195-config-volume\") pod \"collect-profiles-29486910-w5nqj\" (UID: \"fc94ad7e-9d49-4100-a653-8957ee663195\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" Jan 24 00:30:00 crc kubenswrapper[4676]: I0124 00:30:00.320550 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzjs8\" (UniqueName: \"kubernetes.io/projected/fc94ad7e-9d49-4100-a653-8957ee663195-kube-api-access-vzjs8\") pod \"collect-profiles-29486910-w5nqj\" (UID: \"fc94ad7e-9d49-4100-a653-8957ee663195\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" Jan 24 00:30:00 crc kubenswrapper[4676]: I0124 00:30:00.320577 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc94ad7e-9d49-4100-a653-8957ee663195-secret-volume\") pod \"collect-profiles-29486910-w5nqj\" (UID: \"fc94ad7e-9d49-4100-a653-8957ee663195\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" Jan 24 00:30:00 crc kubenswrapper[4676]: I0124 00:30:00.321498 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc94ad7e-9d49-4100-a653-8957ee663195-config-volume\") pod \"collect-profiles-29486910-w5nqj\" (UID: \"fc94ad7e-9d49-4100-a653-8957ee663195\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" Jan 24 00:30:00 crc kubenswrapper[4676]: I0124 00:30:00.335005 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc94ad7e-9d49-4100-a653-8957ee663195-secret-volume\") pod \"collect-profiles-29486910-w5nqj\" (UID: \"fc94ad7e-9d49-4100-a653-8957ee663195\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" Jan 24 00:30:00 crc kubenswrapper[4676]: I0124 00:30:00.340413 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzjs8\" (UniqueName: \"kubernetes.io/projected/fc94ad7e-9d49-4100-a653-8957ee663195-kube-api-access-vzjs8\") pod \"collect-profiles-29486910-w5nqj\" (UID: \"fc94ad7e-9d49-4100-a653-8957ee663195\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" Jan 24 00:30:00 crc kubenswrapper[4676]: I0124 00:30:00.528557 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.023084 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj"] Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.521364 4676 generic.go:334] "Generic (PLEG): container finished" podID="fc94ad7e-9d49-4100-a653-8957ee663195" containerID="8db303a059dedc6aec4dda92c8f1bdb2072d4f7fcc3f6c35461f6c01badf580c" exitCode=0 Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.521462 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" event={"ID":"fc94ad7e-9d49-4100-a653-8957ee663195","Type":"ContainerDied","Data":"8db303a059dedc6aec4dda92c8f1bdb2072d4f7fcc3f6c35461f6c01badf580c"} Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.521717 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" event={"ID":"fc94ad7e-9d49-4100-a653-8957ee663195","Type":"ContainerStarted","Data":"2baa3f0128f1f45531ea208204732c90b4b655af648d15f19a28ea9c106d6c67"} Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.615956 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-69zb6"] Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.618093 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.631673 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-69zb6"] Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.752680 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gxkn\" (UniqueName: \"kubernetes.io/projected/6f07f20b-f218-4f5b-9f8a-127ca3b44402-kube-api-access-5gxkn\") pod \"redhat-marketplace-69zb6\" (UID: \"6f07f20b-f218-4f5b-9f8a-127ca3b44402\") " pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.752947 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f07f20b-f218-4f5b-9f8a-127ca3b44402-utilities\") pod \"redhat-marketplace-69zb6\" (UID: \"6f07f20b-f218-4f5b-9f8a-127ca3b44402\") " pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.753109 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f07f20b-f218-4f5b-9f8a-127ca3b44402-catalog-content\") pod \"redhat-marketplace-69zb6\" (UID: \"6f07f20b-f218-4f5b-9f8a-127ca3b44402\") " pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.854427 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f07f20b-f218-4f5b-9f8a-127ca3b44402-catalog-content\") pod \"redhat-marketplace-69zb6\" (UID: \"6f07f20b-f218-4f5b-9f8a-127ca3b44402\") " pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.854649 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5gxkn\" (UniqueName: \"kubernetes.io/projected/6f07f20b-f218-4f5b-9f8a-127ca3b44402-kube-api-access-5gxkn\") pod \"redhat-marketplace-69zb6\" (UID: \"6f07f20b-f218-4f5b-9f8a-127ca3b44402\") " pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.854763 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f07f20b-f218-4f5b-9f8a-127ca3b44402-utilities\") pod \"redhat-marketplace-69zb6\" (UID: \"6f07f20b-f218-4f5b-9f8a-127ca3b44402\") " pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.855038 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f07f20b-f218-4f5b-9f8a-127ca3b44402-catalog-content\") pod \"redhat-marketplace-69zb6\" (UID: \"6f07f20b-f218-4f5b-9f8a-127ca3b44402\") " pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.855511 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f07f20b-f218-4f5b-9f8a-127ca3b44402-utilities\") pod \"redhat-marketplace-69zb6\" (UID: \"6f07f20b-f218-4f5b-9f8a-127ca3b44402\") " pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.885153 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gxkn\" (UniqueName: \"kubernetes.io/projected/6f07f20b-f218-4f5b-9f8a-127ca3b44402-kube-api-access-5gxkn\") pod \"redhat-marketplace-69zb6\" (UID: \"6f07f20b-f218-4f5b-9f8a-127ca3b44402\") " pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:01 crc kubenswrapper[4676]: I0124 00:30:01.951759 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:02 crc kubenswrapper[4676]: I0124 00:30:02.450739 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-69zb6"] Jan 24 00:30:02 crc kubenswrapper[4676]: I0124 00:30:02.534116 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69zb6" event={"ID":"6f07f20b-f218-4f5b-9f8a-127ca3b44402","Type":"ContainerStarted","Data":"fb5a366881b73b432a758371a8751ca97872bde3cb9d50974c2e5c2b309a7330"} Jan 24 00:30:02 crc kubenswrapper[4676]: I0124 00:30:02.878554 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" Jan 24 00:30:02 crc kubenswrapper[4676]: I0124 00:30:02.975570 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzjs8\" (UniqueName: \"kubernetes.io/projected/fc94ad7e-9d49-4100-a653-8957ee663195-kube-api-access-vzjs8\") pod \"fc94ad7e-9d49-4100-a653-8957ee663195\" (UID: \"fc94ad7e-9d49-4100-a653-8957ee663195\") " Jan 24 00:30:02 crc kubenswrapper[4676]: I0124 00:30:02.975667 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc94ad7e-9d49-4100-a653-8957ee663195-secret-volume\") pod \"fc94ad7e-9d49-4100-a653-8957ee663195\" (UID: \"fc94ad7e-9d49-4100-a653-8957ee663195\") " Jan 24 00:30:02 crc kubenswrapper[4676]: I0124 00:30:02.975812 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc94ad7e-9d49-4100-a653-8957ee663195-config-volume\") pod \"fc94ad7e-9d49-4100-a653-8957ee663195\" (UID: \"fc94ad7e-9d49-4100-a653-8957ee663195\") " Jan 24 00:30:02 crc kubenswrapper[4676]: I0124 00:30:02.977169 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/fc94ad7e-9d49-4100-a653-8957ee663195-config-volume" (OuterVolumeSpecName: "config-volume") pod "fc94ad7e-9d49-4100-a653-8957ee663195" (UID: "fc94ad7e-9d49-4100-a653-8957ee663195"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:30:02 crc kubenswrapper[4676]: I0124 00:30:02.984814 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc94ad7e-9d49-4100-a653-8957ee663195-kube-api-access-vzjs8" (OuterVolumeSpecName: "kube-api-access-vzjs8") pod "fc94ad7e-9d49-4100-a653-8957ee663195" (UID: "fc94ad7e-9d49-4100-a653-8957ee663195"). InnerVolumeSpecName "kube-api-access-vzjs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:30:02 crc kubenswrapper[4676]: I0124 00:30:02.990493 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc94ad7e-9d49-4100-a653-8957ee663195-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fc94ad7e-9d49-4100-a653-8957ee663195" (UID: "fc94ad7e-9d49-4100-a653-8957ee663195"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:30:03 crc kubenswrapper[4676]: I0124 00:30:03.078884 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzjs8\" (UniqueName: \"kubernetes.io/projected/fc94ad7e-9d49-4100-a653-8957ee663195-kube-api-access-vzjs8\") on node \"crc\" DevicePath \"\"" Jan 24 00:30:03 crc kubenswrapper[4676]: I0124 00:30:03.078936 4676 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc94ad7e-9d49-4100-a653-8957ee663195-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 00:30:03 crc kubenswrapper[4676]: I0124 00:30:03.078960 4676 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc94ad7e-9d49-4100-a653-8957ee663195-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 00:30:03 crc kubenswrapper[4676]: I0124 00:30:03.550662 4676 generic.go:334] "Generic (PLEG): container finished" podID="6f07f20b-f218-4f5b-9f8a-127ca3b44402" containerID="1df2094ba92d96207746ec06cc7516275de92ffeb2ec963064e4a5ed06a53700" exitCode=0 Jan 24 00:30:03 crc kubenswrapper[4676]: I0124 00:30:03.550739 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69zb6" event={"ID":"6f07f20b-f218-4f5b-9f8a-127ca3b44402","Type":"ContainerDied","Data":"1df2094ba92d96207746ec06cc7516275de92ffeb2ec963064e4a5ed06a53700"} Jan 24 00:30:03 crc kubenswrapper[4676]: I0124 00:30:03.553075 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" event={"ID":"fc94ad7e-9d49-4100-a653-8957ee663195","Type":"ContainerDied","Data":"2baa3f0128f1f45531ea208204732c90b4b655af648d15f19a28ea9c106d6c67"} Jan 24 00:30:03 crc kubenswrapper[4676]: I0124 00:30:03.553096 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2baa3f0128f1f45531ea208204732c90b4b655af648d15f19a28ea9c106d6c67" 
Jan 24 00:30:03 crc kubenswrapper[4676]: I0124 00:30:03.553151 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486910-w5nqj" Jan 24 00:30:03 crc kubenswrapper[4676]: I0124 00:30:03.553439 4676 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 00:30:05 crc kubenswrapper[4676]: I0124 00:30:05.580244 4676 generic.go:334] "Generic (PLEG): container finished" podID="6f07f20b-f218-4f5b-9f8a-127ca3b44402" containerID="1f4c7897af2181f99f84074c3f9548df41f7014f5581145d277c051dea39a12f" exitCode=0 Jan 24 00:30:05 crc kubenswrapper[4676]: I0124 00:30:05.580359 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69zb6" event={"ID":"6f07f20b-f218-4f5b-9f8a-127ca3b44402","Type":"ContainerDied","Data":"1f4c7897af2181f99f84074c3f9548df41f7014f5581145d277c051dea39a12f"} Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.198415 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rmln4"] Jan 24 00:30:06 crc kubenswrapper[4676]: E0124 00:30:06.199153 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc94ad7e-9d49-4100-a653-8957ee663195" containerName="collect-profiles" Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.199173 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc94ad7e-9d49-4100-a653-8957ee663195" containerName="collect-profiles" Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.199358 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc94ad7e-9d49-4100-a653-8957ee663195" containerName="collect-profiles" Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.201050 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.225733 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rmln4"] Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.273800 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df68cc49-9d8a-4c50-a379-35a2d56491e7-catalog-content\") pod \"certified-operators-rmln4\" (UID: \"df68cc49-9d8a-4c50-a379-35a2d56491e7\") " pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.273882 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df68cc49-9d8a-4c50-a379-35a2d56491e7-utilities\") pod \"certified-operators-rmln4\" (UID: \"df68cc49-9d8a-4c50-a379-35a2d56491e7\") " pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.273986 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg6vm\" (UniqueName: \"kubernetes.io/projected/df68cc49-9d8a-4c50-a379-35a2d56491e7-kube-api-access-mg6vm\") pod \"certified-operators-rmln4\" (UID: \"df68cc49-9d8a-4c50-a379-35a2d56491e7\") " pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.376914 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df68cc49-9d8a-4c50-a379-35a2d56491e7-catalog-content\") pod \"certified-operators-rmln4\" (UID: \"df68cc49-9d8a-4c50-a379-35a2d56491e7\") " pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.377344 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df68cc49-9d8a-4c50-a379-35a2d56491e7-utilities\") pod \"certified-operators-rmln4\" (UID: \"df68cc49-9d8a-4c50-a379-35a2d56491e7\") " pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.377559 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg6vm\" (UniqueName: \"kubernetes.io/projected/df68cc49-9d8a-4c50-a379-35a2d56491e7-kube-api-access-mg6vm\") pod \"certified-operators-rmln4\" (UID: \"df68cc49-9d8a-4c50-a379-35a2d56491e7\") " pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.377686 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df68cc49-9d8a-4c50-a379-35a2d56491e7-catalog-content\") pod \"certified-operators-rmln4\" (UID: \"df68cc49-9d8a-4c50-a379-35a2d56491e7\") " pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.378821 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df68cc49-9d8a-4c50-a379-35a2d56491e7-utilities\") pod \"certified-operators-rmln4\" (UID: \"df68cc49-9d8a-4c50-a379-35a2d56491e7\") " pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.398008 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg6vm\" (UniqueName: \"kubernetes.io/projected/df68cc49-9d8a-4c50-a379-35a2d56491e7-kube-api-access-mg6vm\") pod \"certified-operators-rmln4\" (UID: \"df68cc49-9d8a-4c50-a379-35a2d56491e7\") " pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.586351 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.591242 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69zb6" event={"ID":"6f07f20b-f218-4f5b-9f8a-127ca3b44402","Type":"ContainerStarted","Data":"79c7387ea68d950659fb07676ebcecdc5c95a40f02a96f9a9d61a62fac3d6be3"} Jan 24 00:30:06 crc kubenswrapper[4676]: I0124 00:30:06.626279 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-69zb6" podStartSLOduration=3.128961409 podStartE2EDuration="5.626258238s" podCreationTimestamp="2026-01-24 00:30:01 +0000 UTC" firstStartedPulling="2026-01-24 00:30:03.553045913 +0000 UTC m=+1587.583016944" lastFinishedPulling="2026-01-24 00:30:06.050342762 +0000 UTC m=+1590.080313773" observedRunningTime="2026-01-24 00:30:06.616699026 +0000 UTC m=+1590.646670037" watchObservedRunningTime="2026-01-24 00:30:06.626258238 +0000 UTC m=+1590.656229239" Jan 24 00:30:07 crc kubenswrapper[4676]: I0124 00:30:07.141283 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rmln4"] Jan 24 00:30:07 crc kubenswrapper[4676]: I0124 00:30:07.628307 4676 generic.go:334] "Generic (PLEG): container finished" podID="df68cc49-9d8a-4c50-a379-35a2d56491e7" containerID="2b00bdedbfb520a4d73d3abd4a42b5761963be56c158ebdc25278c30b6bd67d2" exitCode=0 Jan 24 00:30:07 crc kubenswrapper[4676]: I0124 00:30:07.630195 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmln4" event={"ID":"df68cc49-9d8a-4c50-a379-35a2d56491e7","Type":"ContainerDied","Data":"2b00bdedbfb520a4d73d3abd4a42b5761963be56c158ebdc25278c30b6bd67d2"} Jan 24 00:30:07 crc kubenswrapper[4676]: I0124 00:30:07.630818 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmln4" 
event={"ID":"df68cc49-9d8a-4c50-a379-35a2d56491e7","Type":"ContainerStarted","Data":"de097375a51280c39d2e2ca9ddfc9fe7339a04c32101fb8241605b662c33ab05"} Jan 24 00:30:08 crc kubenswrapper[4676]: I0124 00:30:08.640609 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmln4" event={"ID":"df68cc49-9d8a-4c50-a379-35a2d56491e7","Type":"ContainerStarted","Data":"368d2b86f772740717cfdf15a60a1fd8f7b0f9d3fd68fb9aa7cef21ebb376792"} Jan 24 00:30:09 crc kubenswrapper[4676]: I0124 00:30:09.364582 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:30:09 crc kubenswrapper[4676]: I0124 00:30:09.364673 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:30:09 crc kubenswrapper[4676]: I0124 00:30:09.364743 4676 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:30:09 crc kubenswrapper[4676]: I0124 00:30:09.365967 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751"} pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 00:30:09 crc kubenswrapper[4676]: I0124 00:30:09.366082 4676 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" containerID="cri-o://9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" gracePeriod=600 Jan 24 00:30:11 crc kubenswrapper[4676]: I0124 00:30:11.953291 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:11 crc kubenswrapper[4676]: I0124 00:30:11.953972 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:12 crc kubenswrapper[4676]: I0124 00:30:12.015289 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:12 crc kubenswrapper[4676]: I0124 00:30:12.678756 4676 generic.go:334] "Generic (PLEG): container finished" podID="df68cc49-9d8a-4c50-a379-35a2d56491e7" containerID="368d2b86f772740717cfdf15a60a1fd8f7b0f9d3fd68fb9aa7cef21ebb376792" exitCode=0 Jan 24 00:30:12 crc kubenswrapper[4676]: I0124 00:30:12.678824 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmln4" event={"ID":"df68cc49-9d8a-4c50-a379-35a2d56491e7","Type":"ContainerDied","Data":"368d2b86f772740717cfdf15a60a1fd8f7b0f9d3fd68fb9aa7cef21ebb376792"} Jan 24 00:30:12 crc kubenswrapper[4676]: I0124 00:30:12.683685 4676 generic.go:334] "Generic (PLEG): container finished" podID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" exitCode=0 Jan 24 00:30:12 crc kubenswrapper[4676]: I0124 00:30:12.683790 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerDied","Data":"9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751"} 
Jan 24 00:30:12 crc kubenswrapper[4676]: I0124 00:30:12.683864 4676 scope.go:117] "RemoveContainer" containerID="6c0fdc4fa29c1a85e06a8e0b7d6899a3299af1dbe5c0f08e67bee66057d6c55e" Jan 24 00:30:12 crc kubenswrapper[4676]: I0124 00:30:12.742163 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:13 crc kubenswrapper[4676]: E0124 00:30:13.464981 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:30:13 crc kubenswrapper[4676]: I0124 00:30:13.700742 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:30:13 crc kubenswrapper[4676]: E0124 00:30:13.701227 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:30:14 crc kubenswrapper[4676]: I0124 00:30:14.190846 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-69zb6"] Jan 24 00:30:14 crc kubenswrapper[4676]: I0124 00:30:14.711416 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-69zb6" podUID="6f07f20b-f218-4f5b-9f8a-127ca3b44402" containerName="registry-server" 
containerID="cri-o://79c7387ea68d950659fb07676ebcecdc5c95a40f02a96f9a9d61a62fac3d6be3" gracePeriod=2 Jan 24 00:30:14 crc kubenswrapper[4676]: I0124 00:30:14.711759 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmln4" event={"ID":"df68cc49-9d8a-4c50-a379-35a2d56491e7","Type":"ContainerStarted","Data":"e82244cf7f7fdf513932fe8199f8fe13f14ccd1a9b143f6c05a4a817e6564dfa"} Jan 24 00:30:14 crc kubenswrapper[4676]: I0124 00:30:14.739637 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rmln4" podStartSLOduration=2.284833619 podStartE2EDuration="8.73961936s" podCreationTimestamp="2026-01-24 00:30:06 +0000 UTC" firstStartedPulling="2026-01-24 00:30:07.631957521 +0000 UTC m=+1591.661928522" lastFinishedPulling="2026-01-24 00:30:14.086743262 +0000 UTC m=+1598.116714263" observedRunningTime="2026-01-24 00:30:14.738695724 +0000 UTC m=+1598.768666735" watchObservedRunningTime="2026-01-24 00:30:14.73961936 +0000 UTC m=+1598.769590361" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.286693 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.356479 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f07f20b-f218-4f5b-9f8a-127ca3b44402-catalog-content\") pod \"6f07f20b-f218-4f5b-9f8a-127ca3b44402\" (UID: \"6f07f20b-f218-4f5b-9f8a-127ca3b44402\") " Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.356722 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gxkn\" (UniqueName: \"kubernetes.io/projected/6f07f20b-f218-4f5b-9f8a-127ca3b44402-kube-api-access-5gxkn\") pod \"6f07f20b-f218-4f5b-9f8a-127ca3b44402\" (UID: \"6f07f20b-f218-4f5b-9f8a-127ca3b44402\") " Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.356894 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f07f20b-f218-4f5b-9f8a-127ca3b44402-utilities\") pod \"6f07f20b-f218-4f5b-9f8a-127ca3b44402\" (UID: \"6f07f20b-f218-4f5b-9f8a-127ca3b44402\") " Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.374493 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f07f20b-f218-4f5b-9f8a-127ca3b44402-utilities" (OuterVolumeSpecName: "utilities") pod "6f07f20b-f218-4f5b-9f8a-127ca3b44402" (UID: "6f07f20b-f218-4f5b-9f8a-127ca3b44402"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.379550 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f07f20b-f218-4f5b-9f8a-127ca3b44402-kube-api-access-5gxkn" (OuterVolumeSpecName: "kube-api-access-5gxkn") pod "6f07f20b-f218-4f5b-9f8a-127ca3b44402" (UID: "6f07f20b-f218-4f5b-9f8a-127ca3b44402"). InnerVolumeSpecName "kube-api-access-5gxkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.383178 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f07f20b-f218-4f5b-9f8a-127ca3b44402-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f07f20b-f218-4f5b-9f8a-127ca3b44402" (UID: "6f07f20b-f218-4f5b-9f8a-127ca3b44402"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.458524 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gxkn\" (UniqueName: \"kubernetes.io/projected/6f07f20b-f218-4f5b-9f8a-127ca3b44402-kube-api-access-5gxkn\") on node \"crc\" DevicePath \"\"" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.458555 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f07f20b-f218-4f5b-9f8a-127ca3b44402-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.458564 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f07f20b-f218-4f5b-9f8a-127ca3b44402-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.724063 4676 generic.go:334] "Generic (PLEG): container finished" podID="6f07f20b-f218-4f5b-9f8a-127ca3b44402" containerID="79c7387ea68d950659fb07676ebcecdc5c95a40f02a96f9a9d61a62fac3d6be3" exitCode=0 Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.724121 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69zb6" event={"ID":"6f07f20b-f218-4f5b-9f8a-127ca3b44402","Type":"ContainerDied","Data":"79c7387ea68d950659fb07676ebcecdc5c95a40f02a96f9a9d61a62fac3d6be3"} Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.724163 4676 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-69zb6" event={"ID":"6f07f20b-f218-4f5b-9f8a-127ca3b44402","Type":"ContainerDied","Data":"fb5a366881b73b432a758371a8751ca97872bde3cb9d50974c2e5c2b309a7330"} Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.724194 4676 scope.go:117] "RemoveContainer" containerID="79c7387ea68d950659fb07676ebcecdc5c95a40f02a96f9a9d61a62fac3d6be3" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.724365 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69zb6" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.746512 4676 scope.go:117] "RemoveContainer" containerID="1f4c7897af2181f99f84074c3f9548df41f7014f5581145d277c051dea39a12f" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.791082 4676 scope.go:117] "RemoveContainer" containerID="1df2094ba92d96207746ec06cc7516275de92ffeb2ec963064e4a5ed06a53700" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.797963 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-69zb6"] Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.819793 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-69zb6"] Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.834002 4676 scope.go:117] "RemoveContainer" containerID="79c7387ea68d950659fb07676ebcecdc5c95a40f02a96f9a9d61a62fac3d6be3" Jan 24 00:30:15 crc kubenswrapper[4676]: E0124 00:30:15.834343 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79c7387ea68d950659fb07676ebcecdc5c95a40f02a96f9a9d61a62fac3d6be3\": container with ID starting with 79c7387ea68d950659fb07676ebcecdc5c95a40f02a96f9a9d61a62fac3d6be3 not found: ID does not exist" containerID="79c7387ea68d950659fb07676ebcecdc5c95a40f02a96f9a9d61a62fac3d6be3" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.834387 4676 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79c7387ea68d950659fb07676ebcecdc5c95a40f02a96f9a9d61a62fac3d6be3"} err="failed to get container status \"79c7387ea68d950659fb07676ebcecdc5c95a40f02a96f9a9d61a62fac3d6be3\": rpc error: code = NotFound desc = could not find container \"79c7387ea68d950659fb07676ebcecdc5c95a40f02a96f9a9d61a62fac3d6be3\": container with ID starting with 79c7387ea68d950659fb07676ebcecdc5c95a40f02a96f9a9d61a62fac3d6be3 not found: ID does not exist" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.834413 4676 scope.go:117] "RemoveContainer" containerID="1f4c7897af2181f99f84074c3f9548df41f7014f5581145d277c051dea39a12f" Jan 24 00:30:15 crc kubenswrapper[4676]: E0124 00:30:15.834609 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f4c7897af2181f99f84074c3f9548df41f7014f5581145d277c051dea39a12f\": container with ID starting with 1f4c7897af2181f99f84074c3f9548df41f7014f5581145d277c051dea39a12f not found: ID does not exist" containerID="1f4c7897af2181f99f84074c3f9548df41f7014f5581145d277c051dea39a12f" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.834630 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f4c7897af2181f99f84074c3f9548df41f7014f5581145d277c051dea39a12f"} err="failed to get container status \"1f4c7897af2181f99f84074c3f9548df41f7014f5581145d277c051dea39a12f\": rpc error: code = NotFound desc = could not find container \"1f4c7897af2181f99f84074c3f9548df41f7014f5581145d277c051dea39a12f\": container with ID starting with 1f4c7897af2181f99f84074c3f9548df41f7014f5581145d277c051dea39a12f not found: ID does not exist" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.834645 4676 scope.go:117] "RemoveContainer" containerID="1df2094ba92d96207746ec06cc7516275de92ffeb2ec963064e4a5ed06a53700" Jan 24 00:30:15 crc kubenswrapper[4676]: E0124 
00:30:15.834812 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1df2094ba92d96207746ec06cc7516275de92ffeb2ec963064e4a5ed06a53700\": container with ID starting with 1df2094ba92d96207746ec06cc7516275de92ffeb2ec963064e4a5ed06a53700 not found: ID does not exist" containerID="1df2094ba92d96207746ec06cc7516275de92ffeb2ec963064e4a5ed06a53700" Jan 24 00:30:15 crc kubenswrapper[4676]: I0124 00:30:15.834837 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df2094ba92d96207746ec06cc7516275de92ffeb2ec963064e4a5ed06a53700"} err="failed to get container status \"1df2094ba92d96207746ec06cc7516275de92ffeb2ec963064e4a5ed06a53700\": rpc error: code = NotFound desc = could not find container \"1df2094ba92d96207746ec06cc7516275de92ffeb2ec963064e4a5ed06a53700\": container with ID starting with 1df2094ba92d96207746ec06cc7516275de92ffeb2ec963064e4a5ed06a53700 not found: ID does not exist" Jan 24 00:30:16 crc kubenswrapper[4676]: I0124 00:30:16.269687 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f07f20b-f218-4f5b-9f8a-127ca3b44402" path="/var/lib/kubelet/pods/6f07f20b-f218-4f5b-9f8a-127ca3b44402/volumes" Jan 24 00:30:16 crc kubenswrapper[4676]: I0124 00:30:16.587703 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:16 crc kubenswrapper[4676]: I0124 00:30:16.588150 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:16 crc kubenswrapper[4676]: I0124 00:30:16.646933 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:24 crc kubenswrapper[4676]: I0124 00:30:24.256077 4676 scope.go:117] "RemoveContainer" 
containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:30:24 crc kubenswrapper[4676]: E0124 00:30:24.257284 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:30:26 crc kubenswrapper[4676]: I0124 00:30:26.647798 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:26 crc kubenswrapper[4676]: I0124 00:30:26.699031 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rmln4"] Jan 24 00:30:26 crc kubenswrapper[4676]: I0124 00:30:26.861953 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rmln4" podUID="df68cc49-9d8a-4c50-a379-35a2d56491e7" containerName="registry-server" containerID="cri-o://e82244cf7f7fdf513932fe8199f8fe13f14ccd1a9b143f6c05a4a817e6564dfa" gracePeriod=2 Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.307929 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.395703 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df68cc49-9d8a-4c50-a379-35a2d56491e7-utilities\") pod \"df68cc49-9d8a-4c50-a379-35a2d56491e7\" (UID: \"df68cc49-9d8a-4c50-a379-35a2d56491e7\") " Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.395779 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg6vm\" (UniqueName: \"kubernetes.io/projected/df68cc49-9d8a-4c50-a379-35a2d56491e7-kube-api-access-mg6vm\") pod \"df68cc49-9d8a-4c50-a379-35a2d56491e7\" (UID: \"df68cc49-9d8a-4c50-a379-35a2d56491e7\") " Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.395864 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df68cc49-9d8a-4c50-a379-35a2d56491e7-catalog-content\") pod \"df68cc49-9d8a-4c50-a379-35a2d56491e7\" (UID: \"df68cc49-9d8a-4c50-a379-35a2d56491e7\") " Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.396552 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df68cc49-9d8a-4c50-a379-35a2d56491e7-utilities" (OuterVolumeSpecName: "utilities") pod "df68cc49-9d8a-4c50-a379-35a2d56491e7" (UID: "df68cc49-9d8a-4c50-a379-35a2d56491e7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.403049 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df68cc49-9d8a-4c50-a379-35a2d56491e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.404655 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df68cc49-9d8a-4c50-a379-35a2d56491e7-kube-api-access-mg6vm" (OuterVolumeSpecName: "kube-api-access-mg6vm") pod "df68cc49-9d8a-4c50-a379-35a2d56491e7" (UID: "df68cc49-9d8a-4c50-a379-35a2d56491e7"). InnerVolumeSpecName "kube-api-access-mg6vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.452649 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df68cc49-9d8a-4c50-a379-35a2d56491e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df68cc49-9d8a-4c50-a379-35a2d56491e7" (UID: "df68cc49-9d8a-4c50-a379-35a2d56491e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.504334 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df68cc49-9d8a-4c50-a379-35a2d56491e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.504363 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg6vm\" (UniqueName: \"kubernetes.io/projected/df68cc49-9d8a-4c50-a379-35a2d56491e7-kube-api-access-mg6vm\") on node \"crc\" DevicePath \"\"" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.876929 4676 generic.go:334] "Generic (PLEG): container finished" podID="df68cc49-9d8a-4c50-a379-35a2d56491e7" containerID="e82244cf7f7fdf513932fe8199f8fe13f14ccd1a9b143f6c05a4a817e6564dfa" exitCode=0 Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.877019 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmln4" event={"ID":"df68cc49-9d8a-4c50-a379-35a2d56491e7","Type":"ContainerDied","Data":"e82244cf7f7fdf513932fe8199f8fe13f14ccd1a9b143f6c05a4a817e6564dfa"} Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.877047 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rmln4" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.877292 4676 scope.go:117] "RemoveContainer" containerID="e82244cf7f7fdf513932fe8199f8fe13f14ccd1a9b143f6c05a4a817e6564dfa" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.877260 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmln4" event={"ID":"df68cc49-9d8a-4c50-a379-35a2d56491e7","Type":"ContainerDied","Data":"de097375a51280c39d2e2ca9ddfc9fe7339a04c32101fb8241605b662c33ab05"} Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.898511 4676 scope.go:117] "RemoveContainer" containerID="368d2b86f772740717cfdf15a60a1fd8f7b0f9d3fd68fb9aa7cef21ebb376792" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.927532 4676 scope.go:117] "RemoveContainer" containerID="2b00bdedbfb520a4d73d3abd4a42b5761963be56c158ebdc25278c30b6bd67d2" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.941553 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rmln4"] Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.953500 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rmln4"] Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.983275 4676 scope.go:117] "RemoveContainer" containerID="e82244cf7f7fdf513932fe8199f8fe13f14ccd1a9b143f6c05a4a817e6564dfa" Jan 24 00:30:27 crc kubenswrapper[4676]: E0124 00:30:27.983907 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e82244cf7f7fdf513932fe8199f8fe13f14ccd1a9b143f6c05a4a817e6564dfa\": container with ID starting with e82244cf7f7fdf513932fe8199f8fe13f14ccd1a9b143f6c05a4a817e6564dfa not found: ID does not exist" containerID="e82244cf7f7fdf513932fe8199f8fe13f14ccd1a9b143f6c05a4a817e6564dfa" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.983957 4676 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e82244cf7f7fdf513932fe8199f8fe13f14ccd1a9b143f6c05a4a817e6564dfa"} err="failed to get container status \"e82244cf7f7fdf513932fe8199f8fe13f14ccd1a9b143f6c05a4a817e6564dfa\": rpc error: code = NotFound desc = could not find container \"e82244cf7f7fdf513932fe8199f8fe13f14ccd1a9b143f6c05a4a817e6564dfa\": container with ID starting with e82244cf7f7fdf513932fe8199f8fe13f14ccd1a9b143f6c05a4a817e6564dfa not found: ID does not exist" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.983990 4676 scope.go:117] "RemoveContainer" containerID="368d2b86f772740717cfdf15a60a1fd8f7b0f9d3fd68fb9aa7cef21ebb376792" Jan 24 00:30:27 crc kubenswrapper[4676]: E0124 00:30:27.984755 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"368d2b86f772740717cfdf15a60a1fd8f7b0f9d3fd68fb9aa7cef21ebb376792\": container with ID starting with 368d2b86f772740717cfdf15a60a1fd8f7b0f9d3fd68fb9aa7cef21ebb376792 not found: ID does not exist" containerID="368d2b86f772740717cfdf15a60a1fd8f7b0f9d3fd68fb9aa7cef21ebb376792" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.984786 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"368d2b86f772740717cfdf15a60a1fd8f7b0f9d3fd68fb9aa7cef21ebb376792"} err="failed to get container status \"368d2b86f772740717cfdf15a60a1fd8f7b0f9d3fd68fb9aa7cef21ebb376792\": rpc error: code = NotFound desc = could not find container \"368d2b86f772740717cfdf15a60a1fd8f7b0f9d3fd68fb9aa7cef21ebb376792\": container with ID starting with 368d2b86f772740717cfdf15a60a1fd8f7b0f9d3fd68fb9aa7cef21ebb376792 not found: ID does not exist" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.984809 4676 scope.go:117] "RemoveContainer" containerID="2b00bdedbfb520a4d73d3abd4a42b5761963be56c158ebdc25278c30b6bd67d2" Jan 24 00:30:27 crc kubenswrapper[4676]: E0124 
00:30:27.985093 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b00bdedbfb520a4d73d3abd4a42b5761963be56c158ebdc25278c30b6bd67d2\": container with ID starting with 2b00bdedbfb520a4d73d3abd4a42b5761963be56c158ebdc25278c30b6bd67d2 not found: ID does not exist" containerID="2b00bdedbfb520a4d73d3abd4a42b5761963be56c158ebdc25278c30b6bd67d2" Jan 24 00:30:27 crc kubenswrapper[4676]: I0124 00:30:27.985124 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b00bdedbfb520a4d73d3abd4a42b5761963be56c158ebdc25278c30b6bd67d2"} err="failed to get container status \"2b00bdedbfb520a4d73d3abd4a42b5761963be56c158ebdc25278c30b6bd67d2\": rpc error: code = NotFound desc = could not find container \"2b00bdedbfb520a4d73d3abd4a42b5761963be56c158ebdc25278c30b6bd67d2\": container with ID starting with 2b00bdedbfb520a4d73d3abd4a42b5761963be56c158ebdc25278c30b6bd67d2 not found: ID does not exist" Jan 24 00:30:28 crc kubenswrapper[4676]: I0124 00:30:28.266612 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df68cc49-9d8a-4c50-a379-35a2d56491e7" path="/var/lib/kubelet/pods/df68cc49-9d8a-4c50-a379-35a2d56491e7/volumes" Jan 24 00:30:37 crc kubenswrapper[4676]: I0124 00:30:37.255074 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:30:37 crc kubenswrapper[4676]: E0124 00:30:37.257131 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:30:40 crc kubenswrapper[4676]: I0124 00:30:40.533982 
4676 scope.go:117] "RemoveContainer" containerID="b50dc415c434720b0458f8bff74066a1f5105e74beeb732a3d51cdf130cbe5d3" Jan 24 00:30:40 crc kubenswrapper[4676]: I0124 00:30:40.568318 4676 scope.go:117] "RemoveContainer" containerID="6301d47d4294644c70fd618b97e56c019270427fbdab822047ac3bfe88e7e594" Jan 24 00:30:52 crc kubenswrapper[4676]: I0124 00:30:52.256883 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:30:52 crc kubenswrapper[4676]: E0124 00:30:52.257695 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:30:58 crc kubenswrapper[4676]: I0124 00:30:58.078978 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-ph986"] Jan 24 00:30:58 crc kubenswrapper[4676]: I0124 00:30:58.094435 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d7f6-account-create-update-brcr8"] Jan 24 00:30:58 crc kubenswrapper[4676]: I0124 00:30:58.103790 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-ph986"] Jan 24 00:30:58 crc kubenswrapper[4676]: I0124 00:30:58.112181 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-d7f6-account-create-update-brcr8"] Jan 24 00:30:58 crc kubenswrapper[4676]: I0124 00:30:58.273797 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925957f4-9ca1-4e46-b3ac-0dd0ca5713cd" path="/var/lib/kubelet/pods/925957f4-9ca1-4e46-b3ac-0dd0ca5713cd/volumes" Jan 24 00:30:58 crc kubenswrapper[4676]: I0124 00:30:58.275540 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="98c25d86-7dc6-4b15-9511-3a7cbf4c7592" path="/var/lib/kubelet/pods/98c25d86-7dc6-4b15-9511-3a7cbf4c7592/volumes" Jan 24 00:31:02 crc kubenswrapper[4676]: I0124 00:31:02.058540 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-x4ptm"] Jan 24 00:31:02 crc kubenswrapper[4676]: I0124 00:31:02.077277 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-60c3-account-create-update-xz4kb"] Jan 24 00:31:02 crc kubenswrapper[4676]: I0124 00:31:02.090754 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-x4ptm"] Jan 24 00:31:02 crc kubenswrapper[4676]: I0124 00:31:02.104557 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-60c3-account-create-update-xz4kb"] Jan 24 00:31:02 crc kubenswrapper[4676]: I0124 00:31:02.272295 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42" path="/var/lib/kubelet/pods/4ebbdd8e-9c69-4d2a-8f14-14bc152c1a42/volumes" Jan 24 00:31:02 crc kubenswrapper[4676]: I0124 00:31:02.274270 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6d8987f-6f37-4773-99ad-6bc7fd8971f2" path="/var/lib/kubelet/pods/d6d8987f-6f37-4773-99ad-6bc7fd8971f2/volumes" Jan 24 00:31:03 crc kubenswrapper[4676]: I0124 00:31:03.043233 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-prlkx"] Jan 24 00:31:03 crc kubenswrapper[4676]: I0124 00:31:03.055669 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-prlkx"] Jan 24 00:31:03 crc kubenswrapper[4676]: I0124 00:31:03.067556 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-278e-account-create-update-8fwkd"] Jan 24 00:31:03 crc kubenswrapper[4676]: I0124 00:31:03.074876 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-278e-account-create-update-8fwkd"] Jan 24 
00:31:03 crc kubenswrapper[4676]: I0124 00:31:03.219120 4676 generic.go:334] "Generic (PLEG): container finished" podID="1fd4e1f4-0772-493a-b929-6e93470f9abf" containerID="990efd15c2bf2faf18318ad60df1588709f1e3dfd780fd07d20dad8fa6ff65f7" exitCode=0 Jan 24 00:31:03 crc kubenswrapper[4676]: I0124 00:31:03.219683 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" event={"ID":"1fd4e1f4-0772-493a-b929-6e93470f9abf","Type":"ContainerDied","Data":"990efd15c2bf2faf18318ad60df1588709f1e3dfd780fd07d20dad8fa6ff65f7"} Jan 24 00:31:04 crc kubenswrapper[4676]: I0124 00:31:04.279457 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2374740d-d0b0-4a0c-b23e-d41d3ad524ac" path="/var/lib/kubelet/pods/2374740d-d0b0-4a0c-b23e-d41d3ad524ac/volumes" Jan 24 00:31:04 crc kubenswrapper[4676]: I0124 00:31:04.281540 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="439dce27-2a59-49a2-beab-82482c1ce3cb" path="/var/lib/kubelet/pods/439dce27-2a59-49a2-beab-82482c1ce3cb/volumes" Jan 24 00:31:04 crc kubenswrapper[4676]: I0124 00:31:04.655480 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:31:04 crc kubenswrapper[4676]: I0124 00:31:04.760328 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cctcl\" (UniqueName: \"kubernetes.io/projected/1fd4e1f4-0772-493a-b929-6e93470f9abf-kube-api-access-cctcl\") pod \"1fd4e1f4-0772-493a-b929-6e93470f9abf\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " Jan 24 00:31:04 crc kubenswrapper[4676]: I0124 00:31:04.760397 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-bootstrap-combined-ca-bundle\") pod \"1fd4e1f4-0772-493a-b929-6e93470f9abf\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " Jan 24 00:31:04 crc kubenswrapper[4676]: I0124 00:31:04.760497 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-ssh-key-openstack-edpm-ipam\") pod \"1fd4e1f4-0772-493a-b929-6e93470f9abf\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " Jan 24 00:31:04 crc kubenswrapper[4676]: I0124 00:31:04.760538 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-inventory\") pod \"1fd4e1f4-0772-493a-b929-6e93470f9abf\" (UID: \"1fd4e1f4-0772-493a-b929-6e93470f9abf\") " Jan 24 00:31:04 crc kubenswrapper[4676]: I0124 00:31:04.767118 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1fd4e1f4-0772-493a-b929-6e93470f9abf" (UID: "1fd4e1f4-0772-493a-b929-6e93470f9abf"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:31:04 crc kubenswrapper[4676]: I0124 00:31:04.770542 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd4e1f4-0772-493a-b929-6e93470f9abf-kube-api-access-cctcl" (OuterVolumeSpecName: "kube-api-access-cctcl") pod "1fd4e1f4-0772-493a-b929-6e93470f9abf" (UID: "1fd4e1f4-0772-493a-b929-6e93470f9abf"). InnerVolumeSpecName "kube-api-access-cctcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:31:04 crc kubenswrapper[4676]: I0124 00:31:04.796157 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1fd4e1f4-0772-493a-b929-6e93470f9abf" (UID: "1fd4e1f4-0772-493a-b929-6e93470f9abf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:31:04 crc kubenswrapper[4676]: I0124 00:31:04.798557 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-inventory" (OuterVolumeSpecName: "inventory") pod "1fd4e1f4-0772-493a-b929-6e93470f9abf" (UID: "1fd4e1f4-0772-493a-b929-6e93470f9abf"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:31:04 crc kubenswrapper[4676]: I0124 00:31:04.863674 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:31:04 crc kubenswrapper[4676]: I0124 00:31:04.863986 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:31:04 crc kubenswrapper[4676]: I0124 00:31:04.863997 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cctcl\" (UniqueName: \"kubernetes.io/projected/1fd4e1f4-0772-493a-b929-6e93470f9abf-kube-api-access-cctcl\") on node \"crc\" DevicePath \"\"" Jan 24 00:31:04 crc kubenswrapper[4676]: I0124 00:31:04.864008 4676 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd4e1f4-0772-493a-b929-6e93470f9abf-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.242334 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" event={"ID":"1fd4e1f4-0772-493a-b929-6e93470f9abf","Type":"ContainerDied","Data":"b1076e008af5cd31c6b2d2ff588ee49398925df4037135eafcda7ef3f9b4a900"} Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.242403 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1076e008af5cd31c6b2d2ff588ee49398925df4037135eafcda7ef3f9b4a900" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.242432 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.354248 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k"] Jan 24 00:31:05 crc kubenswrapper[4676]: E0124 00:31:05.354649 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f07f20b-f218-4f5b-9f8a-127ca3b44402" containerName="registry-server" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.354662 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f07f20b-f218-4f5b-9f8a-127ca3b44402" containerName="registry-server" Jan 24 00:31:05 crc kubenswrapper[4676]: E0124 00:31:05.354672 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df68cc49-9d8a-4c50-a379-35a2d56491e7" containerName="extract-utilities" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.354678 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="df68cc49-9d8a-4c50-a379-35a2d56491e7" containerName="extract-utilities" Jan 24 00:31:05 crc kubenswrapper[4676]: E0124 00:31:05.354689 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df68cc49-9d8a-4c50-a379-35a2d56491e7" containerName="registry-server" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.354695 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="df68cc49-9d8a-4c50-a379-35a2d56491e7" containerName="registry-server" Jan 24 00:31:05 crc kubenswrapper[4676]: E0124 00:31:05.354703 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd4e1f4-0772-493a-b929-6e93470f9abf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.354709 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd4e1f4-0772-493a-b929-6e93470f9abf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 24 00:31:05 crc kubenswrapper[4676]: E0124 00:31:05.354723 
4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f07f20b-f218-4f5b-9f8a-127ca3b44402" containerName="extract-utilities" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.354731 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f07f20b-f218-4f5b-9f8a-127ca3b44402" containerName="extract-utilities" Jan 24 00:31:05 crc kubenswrapper[4676]: E0124 00:31:05.354750 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df68cc49-9d8a-4c50-a379-35a2d56491e7" containerName="extract-content" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.354758 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="df68cc49-9d8a-4c50-a379-35a2d56491e7" containerName="extract-content" Jan 24 00:31:05 crc kubenswrapper[4676]: E0124 00:31:05.354784 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f07f20b-f218-4f5b-9f8a-127ca3b44402" containerName="extract-content" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.354792 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f07f20b-f218-4f5b-9f8a-127ca3b44402" containerName="extract-content" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.354980 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="df68cc49-9d8a-4c50-a379-35a2d56491e7" containerName="registry-server" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.355002 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f07f20b-f218-4f5b-9f8a-127ca3b44402" containerName="registry-server" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.355019 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd4e1f4-0772-493a-b929-6e93470f9abf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.355766 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.365186 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.365412 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.365559 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.365798 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.373166 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k"] Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.477207 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e20d58a-5f01-4c23-9ab0-650d3ae76844-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-skh8k\" (UID: \"7e20d58a-5f01-4c23-9ab0-650d3ae76844\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.477273 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e20d58a-5f01-4c23-9ab0-650d3ae76844-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-skh8k\" (UID: \"7e20d58a-5f01-4c23-9ab0-650d3ae76844\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 
00:31:05.477310 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22ck7\" (UniqueName: \"kubernetes.io/projected/7e20d58a-5f01-4c23-9ab0-650d3ae76844-kube-api-access-22ck7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-skh8k\" (UID: \"7e20d58a-5f01-4c23-9ab0-650d3ae76844\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.578557 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e20d58a-5f01-4c23-9ab0-650d3ae76844-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-skh8k\" (UID: \"7e20d58a-5f01-4c23-9ab0-650d3ae76844\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.578606 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e20d58a-5f01-4c23-9ab0-650d3ae76844-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-skh8k\" (UID: \"7e20d58a-5f01-4c23-9ab0-650d3ae76844\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.578645 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22ck7\" (UniqueName: \"kubernetes.io/projected/7e20d58a-5f01-4c23-9ab0-650d3ae76844-kube-api-access-22ck7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-skh8k\" (UID: \"7e20d58a-5f01-4c23-9ab0-650d3ae76844\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.587477 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7e20d58a-5f01-4c23-9ab0-650d3ae76844-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-skh8k\" (UID: \"7e20d58a-5f01-4c23-9ab0-650d3ae76844\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.587956 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e20d58a-5f01-4c23-9ab0-650d3ae76844-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-skh8k\" (UID: \"7e20d58a-5f01-4c23-9ab0-650d3ae76844\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.601001 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22ck7\" (UniqueName: \"kubernetes.io/projected/7e20d58a-5f01-4c23-9ab0-650d3ae76844-kube-api-access-22ck7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-skh8k\" (UID: \"7e20d58a-5f01-4c23-9ab0-650d3ae76844\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" Jan 24 00:31:05 crc kubenswrapper[4676]: I0124 00:31:05.683504 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" Jan 24 00:31:06 crc kubenswrapper[4676]: I0124 00:31:06.080300 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k"] Jan 24 00:31:06 crc kubenswrapper[4676]: I0124 00:31:06.254860 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" event={"ID":"7e20d58a-5f01-4c23-9ab0-650d3ae76844","Type":"ContainerStarted","Data":"d5ce814e830d10bf24b9a1a0312e8af78b1ad869f8900e1ffd1c55e849d5a89e"} Jan 24 00:31:06 crc kubenswrapper[4676]: I0124 00:31:06.262810 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:31:06 crc kubenswrapper[4676]: E0124 00:31:06.263162 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:31:07 crc kubenswrapper[4676]: I0124 00:31:07.268245 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" event={"ID":"7e20d58a-5f01-4c23-9ab0-650d3ae76844","Type":"ContainerStarted","Data":"a00ce8325a13556bd7fc250572ccaf4efe71b1b28dfcc05e56a4b357eae4bb8d"} Jan 24 00:31:19 crc kubenswrapper[4676]: I0124 00:31:19.039442 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" podStartSLOduration=13.478746551 podStartE2EDuration="14.039422559s" podCreationTimestamp="2026-01-24 00:31:05 +0000 UTC" 
firstStartedPulling="2026-01-24 00:31:06.094777178 +0000 UTC m=+1650.124748219" lastFinishedPulling="2026-01-24 00:31:06.655453226 +0000 UTC m=+1650.685424227" observedRunningTime="2026-01-24 00:31:07.290710578 +0000 UTC m=+1651.320681579" watchObservedRunningTime="2026-01-24 00:31:19.039422559 +0000 UTC m=+1663.069393580" Jan 24 00:31:19 crc kubenswrapper[4676]: I0124 00:31:19.041135 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-drjg7"] Jan 24 00:31:19 crc kubenswrapper[4676]: I0124 00:31:19.050732 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-drjg7"] Jan 24 00:31:19 crc kubenswrapper[4676]: I0124 00:31:19.255845 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:31:19 crc kubenswrapper[4676]: E0124 00:31:19.256059 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:31:20 crc kubenswrapper[4676]: I0124 00:31:20.266172 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20421570-a66e-4903-937f-af2ab847b5f1" path="/var/lib/kubelet/pods/20421570-a66e-4903-937f-af2ab847b5f1/volumes" Jan 24 00:31:28 crc kubenswrapper[4676]: I0124 00:31:28.043669 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kbtsc"] Jan 24 00:31:28 crc kubenswrapper[4676]: I0124 00:31:28.063499 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kbtsc"] Jan 24 00:31:28 crc kubenswrapper[4676]: I0124 00:31:28.273858 4676 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="7ebdc261-7c0d-49a2-9233-868fda906788" path="/var/lib/kubelet/pods/7ebdc261-7c0d-49a2-9233-868fda906788/volumes" Jan 24 00:31:30 crc kubenswrapper[4676]: I0124 00:31:30.256574 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:31:30 crc kubenswrapper[4676]: E0124 00:31:30.260681 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:31:40 crc kubenswrapper[4676]: I0124 00:31:40.036068 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-bcttg"] Jan 24 00:31:40 crc kubenswrapper[4676]: I0124 00:31:40.044706 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-bcttg"] Jan 24 00:31:40 crc kubenswrapper[4676]: I0124 00:31:40.267938 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b488c3f8-bc04-4c8d-98b7-72145ae9e948" path="/var/lib/kubelet/pods/b488c3f8-bc04-4c8d-98b7-72145ae9e948/volumes" Jan 24 00:31:40 crc kubenswrapper[4676]: I0124 00:31:40.692494 4676 scope.go:117] "RemoveContainer" containerID="1c08827f5b622dc0d2e57d1709fbff0d00bb4f69592994ac4ce101a373fb04cb" Jan 24 00:31:40 crc kubenswrapper[4676]: I0124 00:31:40.784115 4676 scope.go:117] "RemoveContainer" containerID="b1e8f656f78cf6d79b93f73d693ca8f011b12dbd49906ddf8afe723c5036938e" Jan 24 00:31:40 crc kubenswrapper[4676]: I0124 00:31:40.807831 4676 scope.go:117] "RemoveContainer" containerID="d5c280bb67bfb03306230881277a4f3cf73d70180a6643cdcdc6c29c26f5cdcb" Jan 24 00:31:40 crc kubenswrapper[4676]: I0124 00:31:40.862466 4676 
scope.go:117] "RemoveContainer" containerID="89ec371006cc8981cf4d3c50f26065f195057a546aba2faa19252efcd286b309" Jan 24 00:31:40 crc kubenswrapper[4676]: I0124 00:31:40.897763 4676 scope.go:117] "RemoveContainer" containerID="a55ff63829260fcafaf5bd727871713740d9ed36c94c4fc218cf31c8bb4fa53b" Jan 24 00:31:40 crc kubenswrapper[4676]: I0124 00:31:40.938841 4676 scope.go:117] "RemoveContainer" containerID="7b1df20731b13a3b81d6ecd68e39c0140b372b18913afe443cbe4c6b00f28ea4" Jan 24 00:31:40 crc kubenswrapper[4676]: I0124 00:31:40.982990 4676 scope.go:117] "RemoveContainer" containerID="8280acdb35983168359f77fef7c062161338aa47a5bf092b620c75a0dd83736f" Jan 24 00:31:41 crc kubenswrapper[4676]: I0124 00:31:41.016689 4676 scope.go:117] "RemoveContainer" containerID="a94f61fa91f9d6c0d5e0b0e07bb624e5cb428e8002a06af7cee89d3dcb90c4cd" Jan 24 00:31:41 crc kubenswrapper[4676]: I0124 00:31:41.046690 4676 scope.go:117] "RemoveContainer" containerID="3555dab39dcc24cb3d738eb810520147594f4722ea764e376bb893954691a29d" Jan 24 00:31:42 crc kubenswrapper[4676]: I0124 00:31:42.256526 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:31:42 crc kubenswrapper[4676]: E0124 00:31:42.257183 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:31:45 crc kubenswrapper[4676]: I0124 00:31:45.057695 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d058-account-create-update-pjzm7"] Jan 24 00:31:45 crc kubenswrapper[4676]: I0124 00:31:45.076741 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-4766-account-create-update-bbg95"] Jan 24 00:31:45 crc kubenswrapper[4676]: I0124 00:31:45.088601 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-b4lc5"] Jan 24 00:31:45 crc kubenswrapper[4676]: I0124 00:31:45.095733 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d058-account-create-update-pjzm7"] Jan 24 00:31:45 crc kubenswrapper[4676]: I0124 00:31:45.102727 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ba07-account-create-update-hw2sx"] Jan 24 00:31:45 crc kubenswrapper[4676]: I0124 00:31:45.110583 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-b4lc5"] Jan 24 00:31:45 crc kubenswrapper[4676]: I0124 00:31:45.117060 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-ba07-account-create-update-hw2sx"] Jan 24 00:31:45 crc kubenswrapper[4676]: I0124 00:31:45.123140 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4766-account-create-update-bbg95"] Jan 24 00:31:45 crc kubenswrapper[4676]: I0124 00:31:45.130157 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mtdlz"] Jan 24 00:31:45 crc kubenswrapper[4676]: I0124 00:31:45.136598 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mtdlz"] Jan 24 00:31:46 crc kubenswrapper[4676]: I0124 00:31:46.267478 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d4fb3ef-0cde-499c-8018-14cc96b495f5" path="/var/lib/kubelet/pods/2d4fb3ef-0cde-499c-8018-14cc96b495f5/volumes" Jan 24 00:31:46 crc kubenswrapper[4676]: I0124 00:31:46.269578 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a07672-60fb-4935-83e2-99f39e15427f" path="/var/lib/kubelet/pods/64a07672-60fb-4935-83e2-99f39e15427f/volumes" Jan 24 00:31:46 crc kubenswrapper[4676]: I0124 00:31:46.270437 4676 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="64efcf33-9ffe-402d-b0b0-9cf53c7a5495" path="/var/lib/kubelet/pods/64efcf33-9ffe-402d-b0b0-9cf53c7a5495/volumes" Jan 24 00:31:46 crc kubenswrapper[4676]: I0124 00:31:46.271236 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ace7871-3944-4c72-980f-c9d5e7d65c71" path="/var/lib/kubelet/pods/9ace7871-3944-4c72-980f-c9d5e7d65c71/volumes" Jan 24 00:31:46 crc kubenswrapper[4676]: I0124 00:31:46.272485 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a61f15c0-2fa1-4d69-bb80-471d6e4e7e09" path="/var/lib/kubelet/pods/a61f15c0-2fa1-4d69-bb80-471d6e4e7e09/volumes" Jan 24 00:31:50 crc kubenswrapper[4676]: I0124 00:31:50.056868 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-wg6tc"] Jan 24 00:31:50 crc kubenswrapper[4676]: I0124 00:31:50.080548 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-wg6tc"] Jan 24 00:31:50 crc kubenswrapper[4676]: I0124 00:31:50.268933 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30027bc7-2637-4abb-9568-1190cf80a3a5" path="/var/lib/kubelet/pods/30027bc7-2637-4abb-9568-1190cf80a3a5/volumes" Jan 24 00:31:53 crc kubenswrapper[4676]: I0124 00:31:53.256853 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:31:53 crc kubenswrapper[4676]: E0124 00:31:53.257590 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:32:07 crc kubenswrapper[4676]: I0124 00:32:07.255361 4676 scope.go:117] "RemoveContainer" 
containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:32:07 crc kubenswrapper[4676]: E0124 00:32:07.256137 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:32:22 crc kubenswrapper[4676]: I0124 00:32:22.255811 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:32:22 crc kubenswrapper[4676]: E0124 00:32:22.256827 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:32:25 crc kubenswrapper[4676]: I0124 00:32:25.054022 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-bpbdz"] Jan 24 00:32:25 crc kubenswrapper[4676]: I0124 00:32:25.068150 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-bpbdz"] Jan 24 00:32:26 crc kubenswrapper[4676]: I0124 00:32:26.266113 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01c23d24-6fd8-4660-a0f7-815fdf508f5b" path="/var/lib/kubelet/pods/01c23d24-6fd8-4660-a0f7-815fdf508f5b/volumes" Jan 24 00:32:30 crc kubenswrapper[4676]: I0124 00:32:30.029749 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-4kdvp"] Jan 24 00:32:30 crc 
kubenswrapper[4676]: I0124 00:32:30.036284 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-4kdvp"] Jan 24 00:32:30 crc kubenswrapper[4676]: I0124 00:32:30.270528 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17fc5e6-983e-4678-b22c-c68686271163" path="/var/lib/kubelet/pods/c17fc5e6-983e-4678-b22c-c68686271163/volumes" Jan 24 00:32:35 crc kubenswrapper[4676]: I0124 00:32:35.051874 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-l49lm"] Jan 24 00:32:35 crc kubenswrapper[4676]: I0124 00:32:35.062837 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-l49lm"] Jan 24 00:32:36 crc kubenswrapper[4676]: I0124 00:32:36.024767 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-g5dnq"] Jan 24 00:32:36 crc kubenswrapper[4676]: I0124 00:32:36.035145 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-g5dnq"] Jan 24 00:32:36 crc kubenswrapper[4676]: I0124 00:32:36.274831 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04db8ba1-c0de-4985-8a48-ead625786472" path="/var/lib/kubelet/pods/04db8ba1-c0de-4985-8a48-ead625786472/volumes" Jan 24 00:32:36 crc kubenswrapper[4676]: I0124 00:32:36.275754 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7327fe7-179a-4492-8823-94b1067c17d4" path="/var/lib/kubelet/pods/a7327fe7-179a-4492-8823-94b1067c17d4/volumes" Jan 24 00:32:37 crc kubenswrapper[4676]: I0124 00:32:37.257149 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:32:37 crc kubenswrapper[4676]: E0124 00:32:37.257801 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:32:41 crc kubenswrapper[4676]: I0124 00:32:41.234762 4676 scope.go:117] "RemoveContainer" containerID="531b26a23fe05a6b22c6e35ea5ff32eb5bb18c34495f3ce046a990e3e2b684b5" Jan 24 00:32:41 crc kubenswrapper[4676]: I0124 00:32:41.258281 4676 scope.go:117] "RemoveContainer" containerID="192824bb8ad730af6c36357704c067406da5b951731ba1841c964e1a78d10bc2" Jan 24 00:32:41 crc kubenswrapper[4676]: I0124 00:32:41.303883 4676 scope.go:117] "RemoveContainer" containerID="47e15b8611d25a16d27aae65f29f85b602c09a46ed5cfeeea3914ac200a1d0a7" Jan 24 00:32:41 crc kubenswrapper[4676]: I0124 00:32:41.349796 4676 scope.go:117] "RemoveContainer" containerID="acb8fc5743669070f34cea9944d1cdf719e2212dfd00759a3d455ff74f1aa3b7" Jan 24 00:32:41 crc kubenswrapper[4676]: I0124 00:32:41.381311 4676 scope.go:117] "RemoveContainer" containerID="c87e9f17f624e372e522a3229d29135c0ab3c4953ce39a4a0c79e4002fe69658" Jan 24 00:32:41 crc kubenswrapper[4676]: I0124 00:32:41.453569 4676 scope.go:117] "RemoveContainer" containerID="9f879edfb9e19e2ba5d1f7acc73cb74d671c4cd334c91721768ab3acc3bba0e9" Jan 24 00:32:41 crc kubenswrapper[4676]: I0124 00:32:41.478415 4676 scope.go:117] "RemoveContainer" containerID="b062d260d4a7b754ef8f1768435c8c0945467373bcd6d26d2a060794a30067d3" Jan 24 00:32:41 crc kubenswrapper[4676]: I0124 00:32:41.503483 4676 scope.go:117] "RemoveContainer" containerID="2562adc9221a7b111bd58848de15d9abcd709528ca168c846292e16b34c5154e" Jan 24 00:32:41 crc kubenswrapper[4676]: I0124 00:32:41.528935 4676 scope.go:117] "RemoveContainer" containerID="c0f6b7c08c73c5cd3ba2389c70cde6261bfdc95261dc1f7fcb14c46b9908056d" Jan 24 00:32:41 crc kubenswrapper[4676]: I0124 00:32:41.553790 4676 scope.go:117] "RemoveContainer" 
containerID="bfee0ea90c79ca47b4f4a9473a613bbee236021e6a49db23a4490b8cde0e9a39" Jan 24 00:32:51 crc kubenswrapper[4676]: I0124 00:32:51.063979 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7klnv"] Jan 24 00:32:51 crc kubenswrapper[4676]: I0124 00:32:51.076462 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7klnv"] Jan 24 00:32:51 crc kubenswrapper[4676]: I0124 00:32:51.256712 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:32:51 crc kubenswrapper[4676]: E0124 00:32:51.256912 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:32:52 crc kubenswrapper[4676]: I0124 00:32:52.265450 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1163cc6-7ce1-4f2b-9d4a-f3e215177842" path="/var/lib/kubelet/pods/d1163cc6-7ce1-4f2b-9d4a-f3e215177842/volumes" Jan 24 00:33:05 crc kubenswrapper[4676]: I0124 00:33:05.926221 4676 generic.go:334] "Generic (PLEG): container finished" podID="7e20d58a-5f01-4c23-9ab0-650d3ae76844" containerID="a00ce8325a13556bd7fc250572ccaf4efe71b1b28dfcc05e56a4b357eae4bb8d" exitCode=0 Jan 24 00:33:05 crc kubenswrapper[4676]: I0124 00:33:05.926319 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" event={"ID":"7e20d58a-5f01-4c23-9ab0-650d3ae76844","Type":"ContainerDied","Data":"a00ce8325a13556bd7fc250572ccaf4efe71b1b28dfcc05e56a4b357eae4bb8d"} Jan 24 00:33:06 crc kubenswrapper[4676]: I0124 00:33:06.267697 4676 scope.go:117] 
"RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:33:06 crc kubenswrapper[4676]: E0124 00:33:06.267942 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:33:07 crc kubenswrapper[4676]: I0124 00:33:07.369570 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" Jan 24 00:33:07 crc kubenswrapper[4676]: I0124 00:33:07.488572 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e20d58a-5f01-4c23-9ab0-650d3ae76844-ssh-key-openstack-edpm-ipam\") pod \"7e20d58a-5f01-4c23-9ab0-650d3ae76844\" (UID: \"7e20d58a-5f01-4c23-9ab0-650d3ae76844\") " Jan 24 00:33:07 crc kubenswrapper[4676]: I0124 00:33:07.488860 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e20d58a-5f01-4c23-9ab0-650d3ae76844-inventory\") pod \"7e20d58a-5f01-4c23-9ab0-650d3ae76844\" (UID: \"7e20d58a-5f01-4c23-9ab0-650d3ae76844\") " Jan 24 00:33:07 crc kubenswrapper[4676]: I0124 00:33:07.488973 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22ck7\" (UniqueName: \"kubernetes.io/projected/7e20d58a-5f01-4c23-9ab0-650d3ae76844-kube-api-access-22ck7\") pod \"7e20d58a-5f01-4c23-9ab0-650d3ae76844\" (UID: \"7e20d58a-5f01-4c23-9ab0-650d3ae76844\") " Jan 24 00:33:07 crc kubenswrapper[4676]: I0124 00:33:07.507985 4676 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e20d58a-5f01-4c23-9ab0-650d3ae76844-kube-api-access-22ck7" (OuterVolumeSpecName: "kube-api-access-22ck7") pod "7e20d58a-5f01-4c23-9ab0-650d3ae76844" (UID: "7e20d58a-5f01-4c23-9ab0-650d3ae76844"). InnerVolumeSpecName "kube-api-access-22ck7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:33:07 crc kubenswrapper[4676]: I0124 00:33:07.525619 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e20d58a-5f01-4c23-9ab0-650d3ae76844-inventory" (OuterVolumeSpecName: "inventory") pod "7e20d58a-5f01-4c23-9ab0-650d3ae76844" (UID: "7e20d58a-5f01-4c23-9ab0-650d3ae76844"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:33:07 crc kubenswrapper[4676]: I0124 00:33:07.527881 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e20d58a-5f01-4c23-9ab0-650d3ae76844-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7e20d58a-5f01-4c23-9ab0-650d3ae76844" (UID: "7e20d58a-5f01-4c23-9ab0-650d3ae76844"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:33:07 crc kubenswrapper[4676]: I0124 00:33:07.591646 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e20d58a-5f01-4c23-9ab0-650d3ae76844-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:33:07 crc kubenswrapper[4676]: I0124 00:33:07.591688 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22ck7\" (UniqueName: \"kubernetes.io/projected/7e20d58a-5f01-4c23-9ab0-650d3ae76844-kube-api-access-22ck7\") on node \"crc\" DevicePath \"\"" Jan 24 00:33:07 crc kubenswrapper[4676]: I0124 00:33:07.591703 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e20d58a-5f01-4c23-9ab0-650d3ae76844-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:33:07 crc kubenswrapper[4676]: I0124 00:33:07.951355 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" event={"ID":"7e20d58a-5f01-4c23-9ab0-650d3ae76844","Type":"ContainerDied","Data":"d5ce814e830d10bf24b9a1a0312e8af78b1ad869f8900e1ffd1c55e849d5a89e"} Jan 24 00:33:07 crc kubenswrapper[4676]: I0124 00:33:07.951569 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5ce814e830d10bf24b9a1a0312e8af78b1ad869f8900e1ffd1c55e849d5a89e" Jan 24 00:33:07 crc kubenswrapper[4676]: I0124 00:33:07.951485 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-skh8k" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.086897 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg"] Jan 24 00:33:08 crc kubenswrapper[4676]: E0124 00:33:08.087328 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e20d58a-5f01-4c23-9ab0-650d3ae76844" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.087345 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e20d58a-5f01-4c23-9ab0-650d3ae76844" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.087564 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e20d58a-5f01-4c23-9ab0-650d3ae76844" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.088167 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.091725 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.091859 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.092050 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.092159 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.103663 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg"] Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.205694 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4nq8\" (UniqueName: \"kubernetes.io/projected/7f552002-93ef-485f-9227-a94733534466-kube-api-access-d4nq8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg\" (UID: \"7f552002-93ef-485f-9227-a94733534466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.205894 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f552002-93ef-485f-9227-a94733534466-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg\" (UID: \"7f552002-93ef-485f-9227-a94733534466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" Jan 24 00:33:08 crc kubenswrapper[4676]: 
I0124 00:33:08.206174 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f552002-93ef-485f-9227-a94733534466-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg\" (UID: \"7f552002-93ef-485f-9227-a94733534466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.307524 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f552002-93ef-485f-9227-a94733534466-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg\" (UID: \"7f552002-93ef-485f-9227-a94733534466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.307595 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4nq8\" (UniqueName: \"kubernetes.io/projected/7f552002-93ef-485f-9227-a94733534466-kube-api-access-d4nq8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg\" (UID: \"7f552002-93ef-485f-9227-a94733534466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.307697 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f552002-93ef-485f-9227-a94733534466-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg\" (UID: \"7f552002-93ef-485f-9227-a94733534466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.311367 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7f552002-93ef-485f-9227-a94733534466-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg\" (UID: \"7f552002-93ef-485f-9227-a94733534466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.315225 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f552002-93ef-485f-9227-a94733534466-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg\" (UID: \"7f552002-93ef-485f-9227-a94733534466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.322338 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4nq8\" (UniqueName: \"kubernetes.io/projected/7f552002-93ef-485f-9227-a94733534466-kube-api-access-d4nq8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg\" (UID: \"7f552002-93ef-485f-9227-a94733534466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" Jan 24 00:33:08 crc kubenswrapper[4676]: I0124 00:33:08.432438 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" Jan 24 00:33:09 crc kubenswrapper[4676]: I0124 00:33:09.055263 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg"] Jan 24 00:33:10 crc kubenswrapper[4676]: I0124 00:33:10.010426 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" event={"ID":"7f552002-93ef-485f-9227-a94733534466","Type":"ContainerStarted","Data":"ed3d716a27de10e6db253fd9bf5e4076a67003feba735058d5c8105d10b30d15"} Jan 24 00:33:11 crc kubenswrapper[4676]: I0124 00:33:11.021640 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" event={"ID":"7f552002-93ef-485f-9227-a94733534466","Type":"ContainerStarted","Data":"c5211d8770447c14431877d5448ca4697609d313cfeb7328a9986f02813d587d"} Jan 24 00:33:11 crc kubenswrapper[4676]: I0124 00:33:11.055001 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" podStartSLOduration=2.060294489 podStartE2EDuration="3.05389409s" podCreationTimestamp="2026-01-24 00:33:08 +0000 UTC" firstStartedPulling="2026-01-24 00:33:09.039054811 +0000 UTC m=+1773.069025822" lastFinishedPulling="2026-01-24 00:33:10.032654412 +0000 UTC m=+1774.062625423" observedRunningTime="2026-01-24 00:33:11.038854338 +0000 UTC m=+1775.068825379" watchObservedRunningTime="2026-01-24 00:33:11.05389409 +0000 UTC m=+1775.083865131" Jan 24 00:33:20 crc kubenswrapper[4676]: I0124 00:33:20.256538 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:33:20 crc kubenswrapper[4676]: E0124 00:33:20.257520 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:33:29 crc kubenswrapper[4676]: I0124 00:33:29.048218 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-95bf-account-create-update-7cvdb"] Jan 24 00:33:29 crc kubenswrapper[4676]: I0124 00:33:29.060613 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-95bf-account-create-update-7cvdb"] Jan 24 00:33:30 crc kubenswrapper[4676]: I0124 00:33:30.048033 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a963-account-create-update-jx4g6"] Jan 24 00:33:30 crc kubenswrapper[4676]: I0124 00:33:30.066012 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a963-account-create-update-jx4g6"] Jan 24 00:33:30 crc kubenswrapper[4676]: I0124 00:33:30.283483 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5f8ec9-ee62-4078-846c-291a47631ffb" path="/var/lib/kubelet/pods/2a5f8ec9-ee62-4078-846c-291a47631ffb/volumes" Jan 24 00:33:30 crc kubenswrapper[4676]: I0124 00:33:30.288999 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d47faf0-8eaa-474f-8cde-aed6c10f4a05" path="/var/lib/kubelet/pods/7d47faf0-8eaa-474f-8cde-aed6c10f4a05/volumes" Jan 24 00:33:31 crc kubenswrapper[4676]: I0124 00:33:31.046871 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-vtxzm"] Jan 24 00:33:31 crc kubenswrapper[4676]: I0124 00:33:31.066995 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-bpr95"] Jan 24 00:33:31 crc kubenswrapper[4676]: I0124 00:33:31.076846 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-db-create-b7wzs"] Jan 24 00:33:31 crc kubenswrapper[4676]: I0124 00:33:31.086835 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-bpr95"] Jan 24 00:33:31 crc kubenswrapper[4676]: I0124 00:33:31.095179 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-vtxzm"] Jan 24 00:33:31 crc kubenswrapper[4676]: I0124 00:33:31.103697 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-b7wzs"] Jan 24 00:33:31 crc kubenswrapper[4676]: I0124 00:33:31.111161 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5f9c-account-create-update-9tmf6"] Jan 24 00:33:31 crc kubenswrapper[4676]: I0124 00:33:31.117803 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-5f9c-account-create-update-9tmf6"] Jan 24 00:33:31 crc kubenswrapper[4676]: I0124 00:33:31.274050 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:33:31 crc kubenswrapper[4676]: E0124 00:33:31.274781 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:33:32 crc kubenswrapper[4676]: I0124 00:33:32.275524 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="792cc535-5ed4-4e06-a19c-31ba34c7dfc7" path="/var/lib/kubelet/pods/792cc535-5ed4-4e06-a19c-31ba34c7dfc7/volumes" Jan 24 00:33:32 crc kubenswrapper[4676]: I0124 00:33:32.276861 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd4cfd3-72ab-45d6-8683-85acb6cadf66" 
path="/var/lib/kubelet/pods/9cd4cfd3-72ab-45d6-8683-85acb6cadf66/volumes" Jan 24 00:33:32 crc kubenswrapper[4676]: I0124 00:33:32.278076 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd4662bb-5179-4f57-8571-5198dcf69bdb" path="/var/lib/kubelet/pods/bd4662bb-5179-4f57-8571-5198dcf69bdb/volumes" Jan 24 00:33:32 crc kubenswrapper[4676]: I0124 00:33:32.279233 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee552380-8d96-4a10-a5b8-2deb2e73b15f" path="/var/lib/kubelet/pods/ee552380-8d96-4a10-a5b8-2deb2e73b15f/volumes" Jan 24 00:33:41 crc kubenswrapper[4676]: I0124 00:33:41.752614 4676 scope.go:117] "RemoveContainer" containerID="cc3b8c014c9450ef0997c6ac2396003671e4bb4fb9c8343ce0debdf9d3796b15" Jan 24 00:33:41 crc kubenswrapper[4676]: I0124 00:33:41.781628 4676 scope.go:117] "RemoveContainer" containerID="e7fc18fa4adecb5d9d62a155b3da665c4c438818bc9b1324ec05947c45e02af0" Jan 24 00:33:41 crc kubenswrapper[4676]: I0124 00:33:41.828818 4676 scope.go:117] "RemoveContainer" containerID="3ee83666354e8c207624c3af1d827bd9c61811a7455e005083ff4e2e34ef0d81" Jan 24 00:33:41 crc kubenswrapper[4676]: I0124 00:33:41.876132 4676 scope.go:117] "RemoveContainer" containerID="e4299d6b00616548489e178e72141c6cc9961df799e192961e6eb82793cfee65" Jan 24 00:33:41 crc kubenswrapper[4676]: I0124 00:33:41.922548 4676 scope.go:117] "RemoveContainer" containerID="27297ea6827aaabd3e76dbca0980a960477f82e37ca6b6bb47dba9c165652738" Jan 24 00:33:41 crc kubenswrapper[4676]: I0124 00:33:41.967286 4676 scope.go:117] "RemoveContainer" containerID="848f0e8f0de2d0ecec1629bf16e2fb50f2be26fc04335b7c112ae212cbf9035a" Jan 24 00:33:42 crc kubenswrapper[4676]: I0124 00:33:42.017169 4676 scope.go:117] "RemoveContainer" containerID="0b76bccfcacd7fb1c879fa9b8c0e01b681d270afe951e54be9117c071c170135" Jan 24 00:33:45 crc kubenswrapper[4676]: I0124 00:33:45.256474 4676 scope.go:117] "RemoveContainer" 
containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:33:45 crc kubenswrapper[4676]: E0124 00:33:45.257268 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:33:58 crc kubenswrapper[4676]: I0124 00:33:58.257025 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:33:58 crc kubenswrapper[4676]: E0124 00:33:58.257829 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:34:05 crc kubenswrapper[4676]: I0124 00:34:05.057888 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptgmr"] Jan 24 00:34:05 crc kubenswrapper[4676]: I0124 00:34:05.070704 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptgmr"] Jan 24 00:34:06 crc kubenswrapper[4676]: I0124 00:34:06.269999 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff" path="/var/lib/kubelet/pods/9d3caecf-1bb3-49da-ab6a-76d7cf0e08ff/volumes" Jan 24 00:34:13 crc kubenswrapper[4676]: I0124 00:34:13.256572 4676 scope.go:117] "RemoveContainer" 
containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:34:13 crc kubenswrapper[4676]: E0124 00:34:13.257321 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:34:26 crc kubenswrapper[4676]: I0124 00:34:26.262954 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:34:26 crc kubenswrapper[4676]: E0124 00:34:26.264957 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:34:30 crc kubenswrapper[4676]: I0124 00:34:30.045409 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-2cwbv"] Jan 24 00:34:30 crc kubenswrapper[4676]: I0124 00:34:30.052455 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fpw6c"] Jan 24 00:34:30 crc kubenswrapper[4676]: I0124 00:34:30.057485 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-2cwbv"] Jan 24 00:34:30 crc kubenswrapper[4676]: I0124 00:34:30.065137 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fpw6c"] Jan 24 00:34:30 crc kubenswrapper[4676]: I0124 00:34:30.275285 4676 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ee94537-f601-4765-95cb-d56518fb7fc6" path="/var/lib/kubelet/pods/8ee94537-f601-4765-95cb-d56518fb7fc6/volumes" Jan 24 00:34:30 crc kubenswrapper[4676]: I0124 00:34:30.276126 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1b05e80-51fb-476f-a2b4-bf5a290ea5ae" path="/var/lib/kubelet/pods/a1b05e80-51fb-476f-a2b4-bf5a290ea5ae/volumes" Jan 24 00:34:36 crc kubenswrapper[4676]: I0124 00:34:36.850047 4676 generic.go:334] "Generic (PLEG): container finished" podID="7f552002-93ef-485f-9227-a94733534466" containerID="c5211d8770447c14431877d5448ca4697609d313cfeb7328a9986f02813d587d" exitCode=0 Jan 24 00:34:36 crc kubenswrapper[4676]: I0124 00:34:36.850230 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" event={"ID":"7f552002-93ef-485f-9227-a94733534466","Type":"ContainerDied","Data":"c5211d8770447c14431877d5448ca4697609d313cfeb7328a9986f02813d587d"} Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.255899 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:34:38 crc kubenswrapper[4676]: E0124 00:34:38.256656 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.283640 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.397612 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f552002-93ef-485f-9227-a94733534466-ssh-key-openstack-edpm-ipam\") pod \"7f552002-93ef-485f-9227-a94733534466\" (UID: \"7f552002-93ef-485f-9227-a94733534466\") " Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.398055 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f552002-93ef-485f-9227-a94733534466-inventory\") pod \"7f552002-93ef-485f-9227-a94733534466\" (UID: \"7f552002-93ef-485f-9227-a94733534466\") " Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.398131 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4nq8\" (UniqueName: \"kubernetes.io/projected/7f552002-93ef-485f-9227-a94733534466-kube-api-access-d4nq8\") pod \"7f552002-93ef-485f-9227-a94733534466\" (UID: \"7f552002-93ef-485f-9227-a94733534466\") " Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.406885 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f552002-93ef-485f-9227-a94733534466-kube-api-access-d4nq8" (OuterVolumeSpecName: "kube-api-access-d4nq8") pod "7f552002-93ef-485f-9227-a94733534466" (UID: "7f552002-93ef-485f-9227-a94733534466"). InnerVolumeSpecName "kube-api-access-d4nq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.422484 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f552002-93ef-485f-9227-a94733534466-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7f552002-93ef-485f-9227-a94733534466" (UID: "7f552002-93ef-485f-9227-a94733534466"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.427844 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f552002-93ef-485f-9227-a94733534466-inventory" (OuterVolumeSpecName: "inventory") pod "7f552002-93ef-485f-9227-a94733534466" (UID: "7f552002-93ef-485f-9227-a94733534466"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.500110 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f552002-93ef-485f-9227-a94733534466-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.500140 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4nq8\" (UniqueName: \"kubernetes.io/projected/7f552002-93ef-485f-9227-a94733534466-kube-api-access-d4nq8\") on node \"crc\" DevicePath \"\"" Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.500152 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f552002-93ef-485f-9227-a94733534466-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.890523 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" 
event={"ID":"7f552002-93ef-485f-9227-a94733534466","Type":"ContainerDied","Data":"ed3d716a27de10e6db253fd9bf5e4076a67003feba735058d5c8105d10b30d15"} Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.890568 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed3d716a27de10e6db253fd9bf5e4076a67003feba735058d5c8105d10b30d15" Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.890602 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg" Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.992521 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd"] Jan 24 00:34:38 crc kubenswrapper[4676]: E0124 00:34:38.992934 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f552002-93ef-485f-9227-a94733534466" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.992949 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f552002-93ef-485f-9227-a94733534466" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.994539 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f552002-93ef-485f-9227-a94733534466" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 24 00:34:38 crc kubenswrapper[4676]: I0124 00:34:38.995308 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 00:34:39.003243 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 00:34:39.003399 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 00:34:39.003525 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 00:34:39.003627 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 00:34:39.012737 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd"] Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 00:34:39.110897 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83640815-cc06-4abe-a06f-20a1f8798609-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd\" (UID: \"83640815-cc06-4abe-a06f-20a1f8798609\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 00:34:39.111217 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4r55\" (UniqueName: \"kubernetes.io/projected/83640815-cc06-4abe-a06f-20a1f8798609-kube-api-access-f4r55\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd\" (UID: \"83640815-cc06-4abe-a06f-20a1f8798609\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 
00:34:39.111278 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83640815-cc06-4abe-a06f-20a1f8798609-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd\" (UID: \"83640815-cc06-4abe-a06f-20a1f8798609\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 00:34:39.212606 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83640815-cc06-4abe-a06f-20a1f8798609-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd\" (UID: \"83640815-cc06-4abe-a06f-20a1f8798609\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 00:34:39.212671 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4r55\" (UniqueName: \"kubernetes.io/projected/83640815-cc06-4abe-a06f-20a1f8798609-kube-api-access-f4r55\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd\" (UID: \"83640815-cc06-4abe-a06f-20a1f8798609\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 00:34:39.212724 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83640815-cc06-4abe-a06f-20a1f8798609-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd\" (UID: \"83640815-cc06-4abe-a06f-20a1f8798609\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 00:34:39.217519 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/83640815-cc06-4abe-a06f-20a1f8798609-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd\" (UID: \"83640815-cc06-4abe-a06f-20a1f8798609\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 00:34:39.227993 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83640815-cc06-4abe-a06f-20a1f8798609-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd\" (UID: \"83640815-cc06-4abe-a06f-20a1f8798609\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 00:34:39.238119 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4r55\" (UniqueName: \"kubernetes.io/projected/83640815-cc06-4abe-a06f-20a1f8798609-kube-api-access-f4r55\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd\" (UID: \"83640815-cc06-4abe-a06f-20a1f8798609\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 00:34:39.319202 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" Jan 24 00:34:39 crc kubenswrapper[4676]: I0124 00:34:39.920457 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd"] Jan 24 00:34:40 crc kubenswrapper[4676]: I0124 00:34:40.909099 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" event={"ID":"83640815-cc06-4abe-a06f-20a1f8798609","Type":"ContainerStarted","Data":"053dee6005bb4898c9263694d9cc519e8c586b5f9e5a44d6b717d52a218b5d54"} Jan 24 00:34:40 crc kubenswrapper[4676]: I0124 00:34:40.909352 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" event={"ID":"83640815-cc06-4abe-a06f-20a1f8798609","Type":"ContainerStarted","Data":"079bb0127e01a451f6f4f9fc2bbb1280917d9b125ded43402f6e87d34026dee7"} Jan 24 00:34:40 crc kubenswrapper[4676]: I0124 00:34:40.940248 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" podStartSLOduration=2.368930726 podStartE2EDuration="2.940229528s" podCreationTimestamp="2026-01-24 00:34:38 +0000 UTC" firstStartedPulling="2026-01-24 00:34:39.929708129 +0000 UTC m=+1863.959679140" lastFinishedPulling="2026-01-24 00:34:40.501006941 +0000 UTC m=+1864.530977942" observedRunningTime="2026-01-24 00:34:40.93672745 +0000 UTC m=+1864.966698451" watchObservedRunningTime="2026-01-24 00:34:40.940229528 +0000 UTC m=+1864.970200529" Jan 24 00:34:42 crc kubenswrapper[4676]: I0124 00:34:42.153325 4676 scope.go:117] "RemoveContainer" containerID="78da04e8326c020891ea05f676639495efe89b95f4c1821c5af31b8929259b96" Jan 24 00:34:42 crc kubenswrapper[4676]: I0124 00:34:42.183496 4676 scope.go:117] "RemoveContainer" containerID="1e6d788c12343033e8113f7b2adf05ab1c0f4bf8cb67dce88248d068873baa87" 
Jan 24 00:34:42 crc kubenswrapper[4676]: I0124 00:34:42.230630 4676 scope.go:117] "RemoveContainer" containerID="ba2f94acd9876a58363d3def84609810640e18145307211383bf62958eaadb26" Jan 24 00:34:46 crc kubenswrapper[4676]: I0124 00:34:46.969739 4676 generic.go:334] "Generic (PLEG): container finished" podID="83640815-cc06-4abe-a06f-20a1f8798609" containerID="053dee6005bb4898c9263694d9cc519e8c586b5f9e5a44d6b717d52a218b5d54" exitCode=0 Jan 24 00:34:46 crc kubenswrapper[4676]: I0124 00:34:46.969816 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" event={"ID":"83640815-cc06-4abe-a06f-20a1f8798609","Type":"ContainerDied","Data":"053dee6005bb4898c9263694d9cc519e8c586b5f9e5a44d6b717d52a218b5d54"} Jan 24 00:34:48 crc kubenswrapper[4676]: I0124 00:34:48.381161 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" Jan 24 00:34:48 crc kubenswrapper[4676]: I0124 00:34:48.504049 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83640815-cc06-4abe-a06f-20a1f8798609-ssh-key-openstack-edpm-ipam\") pod \"83640815-cc06-4abe-a06f-20a1f8798609\" (UID: \"83640815-cc06-4abe-a06f-20a1f8798609\") " Jan 24 00:34:48 crc kubenswrapper[4676]: I0124 00:34:48.504144 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83640815-cc06-4abe-a06f-20a1f8798609-inventory\") pod \"83640815-cc06-4abe-a06f-20a1f8798609\" (UID: \"83640815-cc06-4abe-a06f-20a1f8798609\") " Jan 24 00:34:48 crc kubenswrapper[4676]: I0124 00:34:48.504265 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4r55\" (UniqueName: \"kubernetes.io/projected/83640815-cc06-4abe-a06f-20a1f8798609-kube-api-access-f4r55\") pod 
\"83640815-cc06-4abe-a06f-20a1f8798609\" (UID: \"83640815-cc06-4abe-a06f-20a1f8798609\") " Jan 24 00:34:48 crc kubenswrapper[4676]: I0124 00:34:48.516419 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83640815-cc06-4abe-a06f-20a1f8798609-kube-api-access-f4r55" (OuterVolumeSpecName: "kube-api-access-f4r55") pod "83640815-cc06-4abe-a06f-20a1f8798609" (UID: "83640815-cc06-4abe-a06f-20a1f8798609"). InnerVolumeSpecName "kube-api-access-f4r55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:34:48 crc kubenswrapper[4676]: I0124 00:34:48.534409 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83640815-cc06-4abe-a06f-20a1f8798609-inventory" (OuterVolumeSpecName: "inventory") pod "83640815-cc06-4abe-a06f-20a1f8798609" (UID: "83640815-cc06-4abe-a06f-20a1f8798609"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:34:48 crc kubenswrapper[4676]: I0124 00:34:48.539484 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83640815-cc06-4abe-a06f-20a1f8798609-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "83640815-cc06-4abe-a06f-20a1f8798609" (UID: "83640815-cc06-4abe-a06f-20a1f8798609"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:34:48 crc kubenswrapper[4676]: I0124 00:34:48.606488 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83640815-cc06-4abe-a06f-20a1f8798609-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:34:48 crc kubenswrapper[4676]: I0124 00:34:48.606520 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83640815-cc06-4abe-a06f-20a1f8798609-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:34:48 crc kubenswrapper[4676]: I0124 00:34:48.606531 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4r55\" (UniqueName: \"kubernetes.io/projected/83640815-cc06-4abe-a06f-20a1f8798609-kube-api-access-f4r55\") on node \"crc\" DevicePath \"\"" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.033902 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" event={"ID":"83640815-cc06-4abe-a06f-20a1f8798609","Type":"ContainerDied","Data":"079bb0127e01a451f6f4f9fc2bbb1280917d9b125ded43402f6e87d34026dee7"} Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.033949 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="079bb0127e01a451f6f4f9fc2bbb1280917d9b125ded43402f6e87d34026dee7" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.034078 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.108026 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c"] Jan 24 00:34:49 crc kubenswrapper[4676]: E0124 00:34:49.108404 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83640815-cc06-4abe-a06f-20a1f8798609" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.108422 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="83640815-cc06-4abe-a06f-20a1f8798609" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.108730 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="83640815-cc06-4abe-a06f-20a1f8798609" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.109644 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c"] Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.109740 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.116242 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.116485 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.116573 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.116612 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.219326 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15fbef8d-e606-4b72-a994-e71d03e8fec8-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz24c\" (UID: \"15fbef8d-e606-4b72-a994-e71d03e8fec8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.219395 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrjdn\" (UniqueName: \"kubernetes.io/projected/15fbef8d-e606-4b72-a994-e71d03e8fec8-kube-api-access-qrjdn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz24c\" (UID: \"15fbef8d-e606-4b72-a994-e71d03e8fec8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.219494 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/15fbef8d-e606-4b72-a994-e71d03e8fec8-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz24c\" (UID: \"15fbef8d-e606-4b72-a994-e71d03e8fec8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.321441 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15fbef8d-e606-4b72-a994-e71d03e8fec8-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz24c\" (UID: \"15fbef8d-e606-4b72-a994-e71d03e8fec8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.322183 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15fbef8d-e606-4b72-a994-e71d03e8fec8-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz24c\" (UID: \"15fbef8d-e606-4b72-a994-e71d03e8fec8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.322281 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrjdn\" (UniqueName: \"kubernetes.io/projected/15fbef8d-e606-4b72-a994-e71d03e8fec8-kube-api-access-qrjdn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz24c\" (UID: \"15fbef8d-e606-4b72-a994-e71d03e8fec8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.329327 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15fbef8d-e606-4b72-a994-e71d03e8fec8-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz24c\" (UID: \"15fbef8d-e606-4b72-a994-e71d03e8fec8\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.330935 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15fbef8d-e606-4b72-a994-e71d03e8fec8-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz24c\" (UID: \"15fbef8d-e606-4b72-a994-e71d03e8fec8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.339066 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrjdn\" (UniqueName: \"kubernetes.io/projected/15fbef8d-e606-4b72-a994-e71d03e8fec8-kube-api-access-qrjdn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz24c\" (UID: \"15fbef8d-e606-4b72-a994-e71d03e8fec8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" Jan 24 00:34:49 crc kubenswrapper[4676]: I0124 00:34:49.438839 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" Jan 24 00:34:50 crc kubenswrapper[4676]: I0124 00:34:50.002837 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c"] Jan 24 00:34:50 crc kubenswrapper[4676]: I0124 00:34:50.045152 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" event={"ID":"15fbef8d-e606-4b72-a994-e71d03e8fec8","Type":"ContainerStarted","Data":"7ba6922eccbf3af44bee7c474d3f94451469a6bfb3124c944eb7ac801e95824b"} Jan 24 00:34:51 crc kubenswrapper[4676]: I0124 00:34:51.056027 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" event={"ID":"15fbef8d-e606-4b72-a994-e71d03e8fec8","Type":"ContainerStarted","Data":"f5244c90e6a886fde151ab47a266d37ba45ab65855478b59a54b16449bd8cac9"} Jan 24 00:34:51 crc kubenswrapper[4676]: I0124 00:34:51.081977 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" podStartSLOduration=1.610067049 podStartE2EDuration="2.081951506s" podCreationTimestamp="2026-01-24 00:34:49 +0000 UTC" firstStartedPulling="2026-01-24 00:34:50.012677117 +0000 UTC m=+1874.042648158" lastFinishedPulling="2026-01-24 00:34:50.484561614 +0000 UTC m=+1874.514532615" observedRunningTime="2026-01-24 00:34:51.07489861 +0000 UTC m=+1875.104869631" watchObservedRunningTime="2026-01-24 00:34:51.081951506 +0000 UTC m=+1875.111922517" Jan 24 00:34:51 crc kubenswrapper[4676]: I0124 00:34:51.256456 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:34:51 crc kubenswrapper[4676]: E0124 00:34:51.256758 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:35:06 crc kubenswrapper[4676]: I0124 00:35:06.270109 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:35:06 crc kubenswrapper[4676]: E0124 00:35:06.270738 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:35:14 crc kubenswrapper[4676]: I0124 00:35:14.041994 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-49926"] Jan 24 00:35:14 crc kubenswrapper[4676]: I0124 00:35:14.051819 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-49926"] Jan 24 00:35:14 crc kubenswrapper[4676]: I0124 00:35:14.265700 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e6d63aa-e36a-4ef7-b50d-255b44e72c20" path="/var/lib/kubelet/pods/2e6d63aa-e36a-4ef7-b50d-255b44e72c20/volumes" Jan 24 00:35:17 crc kubenswrapper[4676]: I0124 00:35:17.255174 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:35:18 crc kubenswrapper[4676]: I0124 00:35:18.309852 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"658098ecc7ebeb43955aee4f3317cbcc452c89ec3e2a5ddc24ed196ea90b98d2"} Jan 24 00:35:37 crc kubenswrapper[4676]: I0124 00:35:37.486252 4676 generic.go:334] "Generic (PLEG): container finished" podID="15fbef8d-e606-4b72-a994-e71d03e8fec8" containerID="f5244c90e6a886fde151ab47a266d37ba45ab65855478b59a54b16449bd8cac9" exitCode=0 Jan 24 00:35:37 crc kubenswrapper[4676]: I0124 00:35:37.486355 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" event={"ID":"15fbef8d-e606-4b72-a994-e71d03e8fec8","Type":"ContainerDied","Data":"f5244c90e6a886fde151ab47a266d37ba45ab65855478b59a54b16449bd8cac9"} Jan 24 00:35:38 crc kubenswrapper[4676]: I0124 00:35:38.898974 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.067116 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrjdn\" (UniqueName: \"kubernetes.io/projected/15fbef8d-e606-4b72-a994-e71d03e8fec8-kube-api-access-qrjdn\") pod \"15fbef8d-e606-4b72-a994-e71d03e8fec8\" (UID: \"15fbef8d-e606-4b72-a994-e71d03e8fec8\") " Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.067183 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15fbef8d-e606-4b72-a994-e71d03e8fec8-ssh-key-openstack-edpm-ipam\") pod \"15fbef8d-e606-4b72-a994-e71d03e8fec8\" (UID: \"15fbef8d-e606-4b72-a994-e71d03e8fec8\") " Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.067365 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15fbef8d-e606-4b72-a994-e71d03e8fec8-inventory\") pod \"15fbef8d-e606-4b72-a994-e71d03e8fec8\" (UID: 
\"15fbef8d-e606-4b72-a994-e71d03e8fec8\") " Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.072912 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15fbef8d-e606-4b72-a994-e71d03e8fec8-kube-api-access-qrjdn" (OuterVolumeSpecName: "kube-api-access-qrjdn") pod "15fbef8d-e606-4b72-a994-e71d03e8fec8" (UID: "15fbef8d-e606-4b72-a994-e71d03e8fec8"). InnerVolumeSpecName "kube-api-access-qrjdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.100634 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fbef8d-e606-4b72-a994-e71d03e8fec8-inventory" (OuterVolumeSpecName: "inventory") pod "15fbef8d-e606-4b72-a994-e71d03e8fec8" (UID: "15fbef8d-e606-4b72-a994-e71d03e8fec8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.108626 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fbef8d-e606-4b72-a994-e71d03e8fec8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "15fbef8d-e606-4b72-a994-e71d03e8fec8" (UID: "15fbef8d-e606-4b72-a994-e71d03e8fec8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.170161 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrjdn\" (UniqueName: \"kubernetes.io/projected/15fbef8d-e606-4b72-a994-e71d03e8fec8-kube-api-access-qrjdn\") on node \"crc\" DevicePath \"\"" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.170231 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15fbef8d-e606-4b72-a994-e71d03e8fec8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.170246 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15fbef8d-e606-4b72-a994-e71d03e8fec8-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.509102 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" event={"ID":"15fbef8d-e606-4b72-a994-e71d03e8fec8","Type":"ContainerDied","Data":"7ba6922eccbf3af44bee7c474d3f94451469a6bfb3124c944eb7ac801e95824b"} Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.509158 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba6922eccbf3af44bee7c474d3f94451469a6bfb3124c944eb7ac801e95824b" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.509182 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz24c" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.690089 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7"] Jan 24 00:35:39 crc kubenswrapper[4676]: E0124 00:35:39.690782 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15fbef8d-e606-4b72-a994-e71d03e8fec8" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.690816 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="15fbef8d-e606-4b72-a994-e71d03e8fec8" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.691250 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="15fbef8d-e606-4b72-a994-e71d03e8fec8" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.692102 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.695273 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.698274 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.702452 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.705302 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.724077 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7"] Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.883802 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6bc5ee4-f730-4e1e-9684-b643daed2519-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7\" (UID: \"f6bc5ee4-f730-4e1e-9684-b643daed2519\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.883905 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6bc5ee4-f730-4e1e-9684-b643daed2519-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7\" (UID: \"f6bc5ee4-f730-4e1e-9684-b643daed2519\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.883934 
4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x6b4\" (UniqueName: \"kubernetes.io/projected/f6bc5ee4-f730-4e1e-9684-b643daed2519-kube-api-access-9x6b4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7\" (UID: \"f6bc5ee4-f730-4e1e-9684-b643daed2519\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.986351 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6bc5ee4-f730-4e1e-9684-b643daed2519-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7\" (UID: \"f6bc5ee4-f730-4e1e-9684-b643daed2519\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.986473 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6bc5ee4-f730-4e1e-9684-b643daed2519-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7\" (UID: \"f6bc5ee4-f730-4e1e-9684-b643daed2519\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.986511 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x6b4\" (UniqueName: \"kubernetes.io/projected/f6bc5ee4-f730-4e1e-9684-b643daed2519-kube-api-access-9x6b4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7\" (UID: \"f6bc5ee4-f730-4e1e-9684-b643daed2519\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.994229 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6bc5ee4-f730-4e1e-9684-b643daed2519-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7\" (UID: \"f6bc5ee4-f730-4e1e-9684-b643daed2519\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" Jan 24 00:35:39 crc kubenswrapper[4676]: I0124 00:35:39.995962 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6bc5ee4-f730-4e1e-9684-b643daed2519-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7\" (UID: \"f6bc5ee4-f730-4e1e-9684-b643daed2519\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" Jan 24 00:35:40 crc kubenswrapper[4676]: I0124 00:35:40.004290 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x6b4\" (UniqueName: \"kubernetes.io/projected/f6bc5ee4-f730-4e1e-9684-b643daed2519-kube-api-access-9x6b4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7\" (UID: \"f6bc5ee4-f730-4e1e-9684-b643daed2519\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" Jan 24 00:35:40 crc kubenswrapper[4676]: I0124 00:35:40.018267 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" Jan 24 00:35:40 crc kubenswrapper[4676]: I0124 00:35:40.611853 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7"] Jan 24 00:35:40 crc kubenswrapper[4676]: I0124 00:35:40.619846 4676 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 00:35:41 crc kubenswrapper[4676]: I0124 00:35:41.525573 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" event={"ID":"f6bc5ee4-f730-4e1e-9684-b643daed2519","Type":"ContainerStarted","Data":"4665658c6d6e687af3b155f8c6276554ef2994904fa73e918324ef0a895dab86"} Jan 24 00:35:41 crc kubenswrapper[4676]: I0124 00:35:41.525853 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" event={"ID":"f6bc5ee4-f730-4e1e-9684-b643daed2519","Type":"ContainerStarted","Data":"9f8fc974d6952d15547a94a1cd02cba6b82b321eee0cdb4aa97b6835f688c7ba"} Jan 24 00:35:42 crc kubenswrapper[4676]: I0124 00:35:42.367713 4676 scope.go:117] "RemoveContainer" containerID="d43689494700a5ceee020f09f4ca2f2fd150c60e2e3c7e0d6cd30145e29f275e" Jan 24 00:35:58 crc kubenswrapper[4676]: I0124 00:35:58.708843 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" podStartSLOduration=19.055910678 podStartE2EDuration="19.708822753s" podCreationTimestamp="2026-01-24 00:35:39 +0000 UTC" firstStartedPulling="2026-01-24 00:35:40.619577457 +0000 UTC m=+1924.649548468" lastFinishedPulling="2026-01-24 00:35:41.272489542 +0000 UTC m=+1925.302460543" observedRunningTime="2026-01-24 00:35:41.546792129 +0000 UTC m=+1925.576763130" watchObservedRunningTime="2026-01-24 00:35:58.708822753 +0000 UTC m=+1942.738793754" Jan 24 00:35:58 crc 
kubenswrapper[4676]: I0124 00:35:58.716989 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pjnlx"] Jan 24 00:35:58 crc kubenswrapper[4676]: I0124 00:35:58.721137 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:35:58 crc kubenswrapper[4676]: I0124 00:35:58.731835 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pjnlx"] Jan 24 00:35:58 crc kubenswrapper[4676]: I0124 00:35:58.772552 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txfsm\" (UniqueName: \"kubernetes.io/projected/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-kube-api-access-txfsm\") pod \"community-operators-pjnlx\" (UID: \"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0\") " pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:35:58 crc kubenswrapper[4676]: I0124 00:35:58.772650 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-utilities\") pod \"community-operators-pjnlx\" (UID: \"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0\") " pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:35:58 crc kubenswrapper[4676]: I0124 00:35:58.772682 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-catalog-content\") pod \"community-operators-pjnlx\" (UID: \"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0\") " pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:35:58 crc kubenswrapper[4676]: I0124 00:35:58.873836 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-utilities\") pod \"community-operators-pjnlx\" (UID: \"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0\") " pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:35:58 crc kubenswrapper[4676]: I0124 00:35:58.874063 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-catalog-content\") pod \"community-operators-pjnlx\" (UID: \"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0\") " pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:35:58 crc kubenswrapper[4676]: I0124 00:35:58.874241 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txfsm\" (UniqueName: \"kubernetes.io/projected/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-kube-api-access-txfsm\") pod \"community-operators-pjnlx\" (UID: \"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0\") " pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:35:58 crc kubenswrapper[4676]: I0124 00:35:58.874411 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-utilities\") pod \"community-operators-pjnlx\" (UID: \"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0\") " pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:35:58 crc kubenswrapper[4676]: I0124 00:35:58.874484 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-catalog-content\") pod \"community-operators-pjnlx\" (UID: \"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0\") " pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:35:58 crc kubenswrapper[4676]: I0124 00:35:58.892126 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txfsm\" (UniqueName: 
\"kubernetes.io/projected/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-kube-api-access-txfsm\") pod \"community-operators-pjnlx\" (UID: \"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0\") " pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:35:59 crc kubenswrapper[4676]: I0124 00:35:59.052741 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:35:59 crc kubenswrapper[4676]: I0124 00:35:59.352088 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pjnlx"] Jan 24 00:35:59 crc kubenswrapper[4676]: I0124 00:35:59.670095 4676 generic.go:334] "Generic (PLEG): container finished" podID="34a715a4-3f3f-49f7-baa8-14d0ad60b0d0" containerID="3fc10800e2e755474740e8bb060d4da8fa052be7ca48e286e7371b206cd02d66" exitCode=0 Jan 24 00:35:59 crc kubenswrapper[4676]: I0124 00:35:59.670143 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjnlx" event={"ID":"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0","Type":"ContainerDied","Data":"3fc10800e2e755474740e8bb060d4da8fa052be7ca48e286e7371b206cd02d66"} Jan 24 00:35:59 crc kubenswrapper[4676]: I0124 00:35:59.670356 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjnlx" event={"ID":"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0","Type":"ContainerStarted","Data":"d0751a2e4fafb0354605e902bc985b1ecae83ff901be37db738f0802bd1d6271"} Jan 24 00:36:01 crc kubenswrapper[4676]: I0124 00:36:01.694231 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjnlx" event={"ID":"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0","Type":"ContainerStarted","Data":"bf62ab74165fb7568f822f08e9f28b2f740617c77f920d1ed10bb7f56635302b"} Jan 24 00:36:02 crc kubenswrapper[4676]: I0124 00:36:02.704619 4676 generic.go:334] "Generic (PLEG): container finished" podID="34a715a4-3f3f-49f7-baa8-14d0ad60b0d0" 
containerID="bf62ab74165fb7568f822f08e9f28b2f740617c77f920d1ed10bb7f56635302b" exitCode=0 Jan 24 00:36:02 crc kubenswrapper[4676]: I0124 00:36:02.704689 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjnlx" event={"ID":"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0","Type":"ContainerDied","Data":"bf62ab74165fb7568f822f08e9f28b2f740617c77f920d1ed10bb7f56635302b"} Jan 24 00:36:03 crc kubenswrapper[4676]: I0124 00:36:03.721963 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjnlx" event={"ID":"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0","Type":"ContainerStarted","Data":"fab27611b7a817d4706a36e474f0e3b4f9ac17514428f2841dc2f9a592827a3b"} Jan 24 00:36:04 crc kubenswrapper[4676]: I0124 00:36:04.752911 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pjnlx" podStartSLOduration=2.922483357 podStartE2EDuration="6.752895155s" podCreationTimestamp="2026-01-24 00:35:58 +0000 UTC" firstStartedPulling="2026-01-24 00:35:59.671875157 +0000 UTC m=+1943.701846158" lastFinishedPulling="2026-01-24 00:36:03.502286955 +0000 UTC m=+1947.532257956" observedRunningTime="2026-01-24 00:36:04.747693015 +0000 UTC m=+1948.777664016" watchObservedRunningTime="2026-01-24 00:36:04.752895155 +0000 UTC m=+1948.782866156" Jan 24 00:36:09 crc kubenswrapper[4676]: I0124 00:36:09.054065 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:36:09 crc kubenswrapper[4676]: I0124 00:36:09.055445 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:36:09 crc kubenswrapper[4676]: I0124 00:36:09.097814 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:36:09 crc kubenswrapper[4676]: I0124 
00:36:09.863108 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:36:09 crc kubenswrapper[4676]: I0124 00:36:09.940065 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pjnlx"] Jan 24 00:36:11 crc kubenswrapper[4676]: I0124 00:36:11.797128 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pjnlx" podUID="34a715a4-3f3f-49f7-baa8-14d0ad60b0d0" containerName="registry-server" containerID="cri-o://fab27611b7a817d4706a36e474f0e3b4f9ac17514428f2841dc2f9a592827a3b" gracePeriod=2 Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.292115 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.442201 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-utilities\") pod \"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0\" (UID: \"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0\") " Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.442255 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txfsm\" (UniqueName: \"kubernetes.io/projected/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-kube-api-access-txfsm\") pod \"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0\" (UID: \"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0\") " Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.442494 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-catalog-content\") pod \"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0\" (UID: \"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0\") " Jan 24 00:36:12 crc kubenswrapper[4676]: 
I0124 00:36:12.443907 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-utilities" (OuterVolumeSpecName: "utilities") pod "34a715a4-3f3f-49f7-baa8-14d0ad60b0d0" (UID: "34a715a4-3f3f-49f7-baa8-14d0ad60b0d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.452562 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-kube-api-access-txfsm" (OuterVolumeSpecName: "kube-api-access-txfsm") pod "34a715a4-3f3f-49f7-baa8-14d0ad60b0d0" (UID: "34a715a4-3f3f-49f7-baa8-14d0ad60b0d0"). InnerVolumeSpecName "kube-api-access-txfsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.492706 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34a715a4-3f3f-49f7-baa8-14d0ad60b0d0" (UID: "34a715a4-3f3f-49f7-baa8-14d0ad60b0d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.544577 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.544611 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txfsm\" (UniqueName: \"kubernetes.io/projected/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-kube-api-access-txfsm\") on node \"crc\" DevicePath \"\"" Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.544625 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.810974 4676 generic.go:334] "Generic (PLEG): container finished" podID="34a715a4-3f3f-49f7-baa8-14d0ad60b0d0" containerID="fab27611b7a817d4706a36e474f0e3b4f9ac17514428f2841dc2f9a592827a3b" exitCode=0 Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.811016 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjnlx" event={"ID":"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0","Type":"ContainerDied","Data":"fab27611b7a817d4706a36e474f0e3b4f9ac17514428f2841dc2f9a592827a3b"} Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.811043 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjnlx" event={"ID":"34a715a4-3f3f-49f7-baa8-14d0ad60b0d0","Type":"ContainerDied","Data":"d0751a2e4fafb0354605e902bc985b1ecae83ff901be37db738f0802bd1d6271"} Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.811061 4676 scope.go:117] "RemoveContainer" containerID="fab27611b7a817d4706a36e474f0e3b4f9ac17514428f2841dc2f9a592827a3b" Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 
00:36:12.811102 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pjnlx" Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.849888 4676 scope.go:117] "RemoveContainer" containerID="bf62ab74165fb7568f822f08e9f28b2f740617c77f920d1ed10bb7f56635302b" Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.878084 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pjnlx"] Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.887807 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pjnlx"] Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.956355 4676 scope.go:117] "RemoveContainer" containerID="3fc10800e2e755474740e8bb060d4da8fa052be7ca48e286e7371b206cd02d66" Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.985019 4676 scope.go:117] "RemoveContainer" containerID="fab27611b7a817d4706a36e474f0e3b4f9ac17514428f2841dc2f9a592827a3b" Jan 24 00:36:12 crc kubenswrapper[4676]: E0124 00:36:12.985459 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fab27611b7a817d4706a36e474f0e3b4f9ac17514428f2841dc2f9a592827a3b\": container with ID starting with fab27611b7a817d4706a36e474f0e3b4f9ac17514428f2841dc2f9a592827a3b not found: ID does not exist" containerID="fab27611b7a817d4706a36e474f0e3b4f9ac17514428f2841dc2f9a592827a3b" Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.985575 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fab27611b7a817d4706a36e474f0e3b4f9ac17514428f2841dc2f9a592827a3b"} err="failed to get container status \"fab27611b7a817d4706a36e474f0e3b4f9ac17514428f2841dc2f9a592827a3b\": rpc error: code = NotFound desc = could not find container \"fab27611b7a817d4706a36e474f0e3b4f9ac17514428f2841dc2f9a592827a3b\": container with ID starting with 
fab27611b7a817d4706a36e474f0e3b4f9ac17514428f2841dc2f9a592827a3b not found: ID does not exist" Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.985666 4676 scope.go:117] "RemoveContainer" containerID="bf62ab74165fb7568f822f08e9f28b2f740617c77f920d1ed10bb7f56635302b" Jan 24 00:36:12 crc kubenswrapper[4676]: E0124 00:36:12.986003 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf62ab74165fb7568f822f08e9f28b2f740617c77f920d1ed10bb7f56635302b\": container with ID starting with bf62ab74165fb7568f822f08e9f28b2f740617c77f920d1ed10bb7f56635302b not found: ID does not exist" containerID="bf62ab74165fb7568f822f08e9f28b2f740617c77f920d1ed10bb7f56635302b" Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.986093 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf62ab74165fb7568f822f08e9f28b2f740617c77f920d1ed10bb7f56635302b"} err="failed to get container status \"bf62ab74165fb7568f822f08e9f28b2f740617c77f920d1ed10bb7f56635302b\": rpc error: code = NotFound desc = could not find container \"bf62ab74165fb7568f822f08e9f28b2f740617c77f920d1ed10bb7f56635302b\": container with ID starting with bf62ab74165fb7568f822f08e9f28b2f740617c77f920d1ed10bb7f56635302b not found: ID does not exist" Jan 24 00:36:12 crc kubenswrapper[4676]: I0124 00:36:12.986182 4676 scope.go:117] "RemoveContainer" containerID="3fc10800e2e755474740e8bb060d4da8fa052be7ca48e286e7371b206cd02d66" Jan 24 00:36:12 crc kubenswrapper[4676]: E0124 00:36:12.986493 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc10800e2e755474740e8bb060d4da8fa052be7ca48e286e7371b206cd02d66\": container with ID starting with 3fc10800e2e755474740e8bb060d4da8fa052be7ca48e286e7371b206cd02d66 not found: ID does not exist" containerID="3fc10800e2e755474740e8bb060d4da8fa052be7ca48e286e7371b206cd02d66" Jan 24 00:36:12 crc 
kubenswrapper[4676]: I0124 00:36:12.986570 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc10800e2e755474740e8bb060d4da8fa052be7ca48e286e7371b206cd02d66"} err="failed to get container status \"3fc10800e2e755474740e8bb060d4da8fa052be7ca48e286e7371b206cd02d66\": rpc error: code = NotFound desc = could not find container \"3fc10800e2e755474740e8bb060d4da8fa052be7ca48e286e7371b206cd02d66\": container with ID starting with 3fc10800e2e755474740e8bb060d4da8fa052be7ca48e286e7371b206cd02d66 not found: ID does not exist" Jan 24 00:36:14 crc kubenswrapper[4676]: I0124 00:36:14.271405 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a715a4-3f3f-49f7-baa8-14d0ad60b0d0" path="/var/lib/kubelet/pods/34a715a4-3f3f-49f7-baa8-14d0ad60b0d0/volumes" Jan 24 00:36:44 crc kubenswrapper[4676]: I0124 00:36:44.120062 4676 generic.go:334] "Generic (PLEG): container finished" podID="f6bc5ee4-f730-4e1e-9684-b643daed2519" containerID="4665658c6d6e687af3b155f8c6276554ef2994904fa73e918324ef0a895dab86" exitCode=0 Jan 24 00:36:44 crc kubenswrapper[4676]: I0124 00:36:44.120141 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" event={"ID":"f6bc5ee4-f730-4e1e-9684-b643daed2519","Type":"ContainerDied","Data":"4665658c6d6e687af3b155f8c6276554ef2994904fa73e918324ef0a895dab86"} Jan 24 00:36:45 crc kubenswrapper[4676]: I0124 00:36:45.562246 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" Jan 24 00:36:45 crc kubenswrapper[4676]: I0124 00:36:45.617252 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6bc5ee4-f730-4e1e-9684-b643daed2519-ssh-key-openstack-edpm-ipam\") pod \"f6bc5ee4-f730-4e1e-9684-b643daed2519\" (UID: \"f6bc5ee4-f730-4e1e-9684-b643daed2519\") " Jan 24 00:36:45 crc kubenswrapper[4676]: I0124 00:36:45.617328 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x6b4\" (UniqueName: \"kubernetes.io/projected/f6bc5ee4-f730-4e1e-9684-b643daed2519-kube-api-access-9x6b4\") pod \"f6bc5ee4-f730-4e1e-9684-b643daed2519\" (UID: \"f6bc5ee4-f730-4e1e-9684-b643daed2519\") " Jan 24 00:36:45 crc kubenswrapper[4676]: I0124 00:36:45.617517 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6bc5ee4-f730-4e1e-9684-b643daed2519-inventory\") pod \"f6bc5ee4-f730-4e1e-9684-b643daed2519\" (UID: \"f6bc5ee4-f730-4e1e-9684-b643daed2519\") " Jan 24 00:36:45 crc kubenswrapper[4676]: I0124 00:36:45.623603 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6bc5ee4-f730-4e1e-9684-b643daed2519-kube-api-access-9x6b4" (OuterVolumeSpecName: "kube-api-access-9x6b4") pod "f6bc5ee4-f730-4e1e-9684-b643daed2519" (UID: "f6bc5ee4-f730-4e1e-9684-b643daed2519"). InnerVolumeSpecName "kube-api-access-9x6b4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:36:45 crc kubenswrapper[4676]: I0124 00:36:45.651359 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6bc5ee4-f730-4e1e-9684-b643daed2519-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f6bc5ee4-f730-4e1e-9684-b643daed2519" (UID: "f6bc5ee4-f730-4e1e-9684-b643daed2519"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:36:45 crc kubenswrapper[4676]: I0124 00:36:45.654218 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6bc5ee4-f730-4e1e-9684-b643daed2519-inventory" (OuterVolumeSpecName: "inventory") pod "f6bc5ee4-f730-4e1e-9684-b643daed2519" (UID: "f6bc5ee4-f730-4e1e-9684-b643daed2519"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:36:45 crc kubenswrapper[4676]: I0124 00:36:45.720016 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6bc5ee4-f730-4e1e-9684-b643daed2519-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:36:45 crc kubenswrapper[4676]: I0124 00:36:45.720050 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x6b4\" (UniqueName: \"kubernetes.io/projected/f6bc5ee4-f730-4e1e-9684-b643daed2519-kube-api-access-9x6b4\") on node \"crc\" DevicePath \"\"" Jan 24 00:36:45 crc kubenswrapper[4676]: I0124 00:36:45.720065 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6bc5ee4-f730-4e1e-9684-b643daed2519-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.147031 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" 
event={"ID":"f6bc5ee4-f730-4e1e-9684-b643daed2519","Type":"ContainerDied","Data":"9f8fc974d6952d15547a94a1cd02cba6b82b321eee0cdb4aa97b6835f688c7ba"} Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.147627 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f8fc974d6952d15547a94a1cd02cba6b82b321eee0cdb4aa97b6835f688c7ba" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.147127 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.277154 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8b9rz"] Jan 24 00:36:46 crc kubenswrapper[4676]: E0124 00:36:46.277570 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a715a4-3f3f-49f7-baa8-14d0ad60b0d0" containerName="extract-content" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.277590 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a715a4-3f3f-49f7-baa8-14d0ad60b0d0" containerName="extract-content" Jan 24 00:36:46 crc kubenswrapper[4676]: E0124 00:36:46.277631 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6bc5ee4-f730-4e1e-9684-b643daed2519" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.277642 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6bc5ee4-f730-4e1e-9684-b643daed2519" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 24 00:36:46 crc kubenswrapper[4676]: E0124 00:36:46.277690 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a715a4-3f3f-49f7-baa8-14d0ad60b0d0" containerName="extract-utilities" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.277700 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a715a4-3f3f-49f7-baa8-14d0ad60b0d0" 
containerName="extract-utilities" Jan 24 00:36:46 crc kubenswrapper[4676]: E0124 00:36:46.277716 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a715a4-3f3f-49f7-baa8-14d0ad60b0d0" containerName="registry-server" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.277725 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a715a4-3f3f-49f7-baa8-14d0ad60b0d0" containerName="registry-server" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.277949 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6bc5ee4-f730-4e1e-9684-b643daed2519" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.277982 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a715a4-3f3f-49f7-baa8-14d0ad60b0d0" containerName="registry-server" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.278667 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.288480 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.288513 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.288720 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.289080 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.297685 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8b9rz"] Jan 24 00:36:46 crc kubenswrapper[4676]: 
I0124 00:36:46.331520 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8b9rz\" (UID: \"d1ed73dc-4392-4d20-a592-4a8c5ba9c104\") " pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.331558 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8b9rz\" (UID: \"d1ed73dc-4392-4d20-a592-4a8c5ba9c104\") " pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.331632 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9z29\" (UniqueName: \"kubernetes.io/projected/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-kube-api-access-d9z29\") pod \"ssh-known-hosts-edpm-deployment-8b9rz\" (UID: \"d1ed73dc-4392-4d20-a592-4a8c5ba9c104\") " pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.432681 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8b9rz\" (UID: \"d1ed73dc-4392-4d20-a592-4a8c5ba9c104\") " pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.432724 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8b9rz\" (UID: 
\"d1ed73dc-4392-4d20-a592-4a8c5ba9c104\") " pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.432797 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9z29\" (UniqueName: \"kubernetes.io/projected/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-kube-api-access-d9z29\") pod \"ssh-known-hosts-edpm-deployment-8b9rz\" (UID: \"d1ed73dc-4392-4d20-a592-4a8c5ba9c104\") " pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.436453 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8b9rz\" (UID: \"d1ed73dc-4392-4d20-a592-4a8c5ba9c104\") " pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.437601 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8b9rz\" (UID: \"d1ed73dc-4392-4d20-a592-4a8c5ba9c104\") " pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.454174 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9z29\" (UniqueName: \"kubernetes.io/projected/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-kube-api-access-d9z29\") pod \"ssh-known-hosts-edpm-deployment-8b9rz\" (UID: \"d1ed73dc-4392-4d20-a592-4a8c5ba9c104\") " pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" Jan 24 00:36:46 crc kubenswrapper[4676]: I0124 00:36:46.631863 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" Jan 24 00:36:47 crc kubenswrapper[4676]: I0124 00:36:47.197425 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8b9rz"] Jan 24 00:36:48 crc kubenswrapper[4676]: I0124 00:36:48.168602 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" event={"ID":"d1ed73dc-4392-4d20-a592-4a8c5ba9c104","Type":"ContainerStarted","Data":"e9b15508842a6e3843219b6f379d487b2bda8b91380d62eec7af7fd65f163e7e"} Jan 24 00:36:48 crc kubenswrapper[4676]: I0124 00:36:48.168953 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" event={"ID":"d1ed73dc-4392-4d20-a592-4a8c5ba9c104","Type":"ContainerStarted","Data":"d82373d13c1684a8d66f88055ba4347b10bef0e1909b0ca7e44bf1021f68c9e1"} Jan 24 00:36:48 crc kubenswrapper[4676]: I0124 00:36:48.190543 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" podStartSLOduration=1.726424199 podStartE2EDuration="2.190522376s" podCreationTimestamp="2026-01-24 00:36:46 +0000 UTC" firstStartedPulling="2026-01-24 00:36:47.206867567 +0000 UTC m=+1991.236838558" lastFinishedPulling="2026-01-24 00:36:47.670965724 +0000 UTC m=+1991.700936735" observedRunningTime="2026-01-24 00:36:48.185744949 +0000 UTC m=+1992.215716000" watchObservedRunningTime="2026-01-24 00:36:48.190522376 +0000 UTC m=+1992.220493397" Jan 24 00:36:54 crc kubenswrapper[4676]: I0124 00:36:54.211032 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-twx92"] Jan 24 00:36:54 crc kubenswrapper[4676]: I0124 00:36:54.218210 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:36:54 crc kubenswrapper[4676]: I0124 00:36:54.273542 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twx92"] Jan 24 00:36:54 crc kubenswrapper[4676]: I0124 00:36:54.310358 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxlrz\" (UniqueName: \"kubernetes.io/projected/a3e85f87-765d-44bb-9187-6e5276434b05-kube-api-access-fxlrz\") pod \"redhat-operators-twx92\" (UID: \"a3e85f87-765d-44bb-9187-6e5276434b05\") " pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:36:54 crc kubenswrapper[4676]: I0124 00:36:54.310649 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3e85f87-765d-44bb-9187-6e5276434b05-utilities\") pod \"redhat-operators-twx92\" (UID: \"a3e85f87-765d-44bb-9187-6e5276434b05\") " pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:36:54 crc kubenswrapper[4676]: I0124 00:36:54.310739 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3e85f87-765d-44bb-9187-6e5276434b05-catalog-content\") pod \"redhat-operators-twx92\" (UID: \"a3e85f87-765d-44bb-9187-6e5276434b05\") " pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:36:54 crc kubenswrapper[4676]: I0124 00:36:54.411864 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxlrz\" (UniqueName: \"kubernetes.io/projected/a3e85f87-765d-44bb-9187-6e5276434b05-kube-api-access-fxlrz\") pod \"redhat-operators-twx92\" (UID: \"a3e85f87-765d-44bb-9187-6e5276434b05\") " pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:36:54 crc kubenswrapper[4676]: I0124 00:36:54.412115 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3e85f87-765d-44bb-9187-6e5276434b05-utilities\") pod \"redhat-operators-twx92\" (UID: \"a3e85f87-765d-44bb-9187-6e5276434b05\") " pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:36:54 crc kubenswrapper[4676]: I0124 00:36:54.412225 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3e85f87-765d-44bb-9187-6e5276434b05-catalog-content\") pod \"redhat-operators-twx92\" (UID: \"a3e85f87-765d-44bb-9187-6e5276434b05\") " pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:36:54 crc kubenswrapper[4676]: I0124 00:36:54.412614 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3e85f87-765d-44bb-9187-6e5276434b05-catalog-content\") pod \"redhat-operators-twx92\" (UID: \"a3e85f87-765d-44bb-9187-6e5276434b05\") " pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:36:54 crc kubenswrapper[4676]: I0124 00:36:54.412706 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3e85f87-765d-44bb-9187-6e5276434b05-utilities\") pod \"redhat-operators-twx92\" (UID: \"a3e85f87-765d-44bb-9187-6e5276434b05\") " pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:36:54 crc kubenswrapper[4676]: I0124 00:36:54.438762 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxlrz\" (UniqueName: \"kubernetes.io/projected/a3e85f87-765d-44bb-9187-6e5276434b05-kube-api-access-fxlrz\") pod \"redhat-operators-twx92\" (UID: \"a3e85f87-765d-44bb-9187-6e5276434b05\") " pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:36:54 crc kubenswrapper[4676]: I0124 00:36:54.564448 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:36:55 crc kubenswrapper[4676]: I0124 00:36:55.238625 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twx92"] Jan 24 00:36:55 crc kubenswrapper[4676]: I0124 00:36:55.253179 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twx92" event={"ID":"a3e85f87-765d-44bb-9187-6e5276434b05","Type":"ContainerStarted","Data":"12f78548df580a50ff02c0e908a128c3171c6cd389e2bfb1762c569bef21f86a"} Jan 24 00:36:56 crc kubenswrapper[4676]: I0124 00:36:56.271689 4676 generic.go:334] "Generic (PLEG): container finished" podID="a3e85f87-765d-44bb-9187-6e5276434b05" containerID="c908b71adc9cf1a9bdf7670a770704804db8201b25b1a5fbae60f99ced333109" exitCode=0 Jan 24 00:36:56 crc kubenswrapper[4676]: I0124 00:36:56.272038 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twx92" event={"ID":"a3e85f87-765d-44bb-9187-6e5276434b05","Type":"ContainerDied","Data":"c908b71adc9cf1a9bdf7670a770704804db8201b25b1a5fbae60f99ced333109"} Jan 24 00:36:56 crc kubenswrapper[4676]: I0124 00:36:56.275617 4676 generic.go:334] "Generic (PLEG): container finished" podID="d1ed73dc-4392-4d20-a592-4a8c5ba9c104" containerID="e9b15508842a6e3843219b6f379d487b2bda8b91380d62eec7af7fd65f163e7e" exitCode=0 Jan 24 00:36:56 crc kubenswrapper[4676]: I0124 00:36:56.275670 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" event={"ID":"d1ed73dc-4392-4d20-a592-4a8c5ba9c104","Type":"ContainerDied","Data":"e9b15508842a6e3843219b6f379d487b2bda8b91380d62eec7af7fd65f163e7e"} Jan 24 00:36:57 crc kubenswrapper[4676]: I0124 00:36:57.286248 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twx92" 
event={"ID":"a3e85f87-765d-44bb-9187-6e5276434b05","Type":"ContainerStarted","Data":"7cafafc95566588ce82892ea8b197b1b19666173405b7ac3e5ed41d6f52ba98c"} Jan 24 00:36:57 crc kubenswrapper[4676]: I0124 00:36:57.756670 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" Jan 24 00:36:57 crc kubenswrapper[4676]: I0124 00:36:57.768327 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-inventory-0\") pod \"d1ed73dc-4392-4d20-a592-4a8c5ba9c104\" (UID: \"d1ed73dc-4392-4d20-a592-4a8c5ba9c104\") " Jan 24 00:36:57 crc kubenswrapper[4676]: I0124 00:36:57.768479 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9z29\" (UniqueName: \"kubernetes.io/projected/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-kube-api-access-d9z29\") pod \"d1ed73dc-4392-4d20-a592-4a8c5ba9c104\" (UID: \"d1ed73dc-4392-4d20-a592-4a8c5ba9c104\") " Jan 24 00:36:57 crc kubenswrapper[4676]: I0124 00:36:57.768607 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-ssh-key-openstack-edpm-ipam\") pod \"d1ed73dc-4392-4d20-a592-4a8c5ba9c104\" (UID: \"d1ed73dc-4392-4d20-a592-4a8c5ba9c104\") " Jan 24 00:36:57 crc kubenswrapper[4676]: I0124 00:36:57.778565 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-kube-api-access-d9z29" (OuterVolumeSpecName: "kube-api-access-d9z29") pod "d1ed73dc-4392-4d20-a592-4a8c5ba9c104" (UID: "d1ed73dc-4392-4d20-a592-4a8c5ba9c104"). InnerVolumeSpecName "kube-api-access-d9z29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:36:57 crc kubenswrapper[4676]: I0124 00:36:57.820861 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d1ed73dc-4392-4d20-a592-4a8c5ba9c104" (UID: "d1ed73dc-4392-4d20-a592-4a8c5ba9c104"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:36:57 crc kubenswrapper[4676]: I0124 00:36:57.822541 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d1ed73dc-4392-4d20-a592-4a8c5ba9c104" (UID: "d1ed73dc-4392-4d20-a592-4a8c5ba9c104"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:36:57 crc kubenswrapper[4676]: I0124 00:36:57.870504 4676 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:36:57 crc kubenswrapper[4676]: I0124 00:36:57.870535 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9z29\" (UniqueName: \"kubernetes.io/projected/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-kube-api-access-d9z29\") on node \"crc\" DevicePath \"\"" Jan 24 00:36:57 crc kubenswrapper[4676]: I0124 00:36:57.870547 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1ed73dc-4392-4d20-a592-4a8c5ba9c104-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.299195 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" 
event={"ID":"d1ed73dc-4392-4d20-a592-4a8c5ba9c104","Type":"ContainerDied","Data":"d82373d13c1684a8d66f88055ba4347b10bef0e1909b0ca7e44bf1021f68c9e1"} Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.299249 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d82373d13c1684a8d66f88055ba4347b10bef0e1909b0ca7e44bf1021f68c9e1" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.299412 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8b9rz" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.422175 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn"] Jan 24 00:36:58 crc kubenswrapper[4676]: E0124 00:36:58.424264 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ed73dc-4392-4d20-a592-4a8c5ba9c104" containerName="ssh-known-hosts-edpm-deployment" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.424358 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ed73dc-4392-4d20-a592-4a8c5ba9c104" containerName="ssh-known-hosts-edpm-deployment" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.424676 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ed73dc-4392-4d20-a592-4a8c5ba9c104" containerName="ssh-known-hosts-edpm-deployment" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.425397 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.428824 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.429097 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.431299 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.431448 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.439055 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn"] Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.583754 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mhrvn\" (UID: \"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.583840 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mhrvn\" (UID: \"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.583932 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m52h\" (UniqueName: \"kubernetes.io/projected/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-kube-api-access-9m52h\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mhrvn\" (UID: \"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.686247 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mhrvn\" (UID: \"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.686353 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mhrvn\" (UID: \"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.686452 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m52h\" (UniqueName: \"kubernetes.io/projected/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-kube-api-access-9m52h\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mhrvn\" (UID: \"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.711176 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-mhrvn\" (UID: \"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.713433 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mhrvn\" (UID: \"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.713542 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m52h\" (UniqueName: \"kubernetes.io/projected/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-kube-api-access-9m52h\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mhrvn\" (UID: \"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" Jan 24 00:36:58 crc kubenswrapper[4676]: I0124 00:36:58.747948 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" Jan 24 00:36:59 crc kubenswrapper[4676]: I0124 00:36:59.331951 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn"] Jan 24 00:37:00 crc kubenswrapper[4676]: I0124 00:37:00.314112 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" event={"ID":"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1","Type":"ContainerStarted","Data":"084b4e0ab44d55e490dea686f2dbac622f75b83ccd35b40989533f2257cebbae"} Jan 24 00:37:01 crc kubenswrapper[4676]: I0124 00:37:01.326785 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" event={"ID":"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1","Type":"ContainerStarted","Data":"656d4d2d5ff068e82b90b5981ff0e141ca2026de96cf3e479d6668c7e1b25ad7"} Jan 24 00:37:01 crc kubenswrapper[4676]: I0124 00:37:01.329721 4676 generic.go:334] "Generic (PLEG): container finished" podID="a3e85f87-765d-44bb-9187-6e5276434b05" containerID="7cafafc95566588ce82892ea8b197b1b19666173405b7ac3e5ed41d6f52ba98c" exitCode=0 Jan 24 00:37:01 crc kubenswrapper[4676]: I0124 00:37:01.329769 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twx92" event={"ID":"a3e85f87-765d-44bb-9187-6e5276434b05","Type":"ContainerDied","Data":"7cafafc95566588ce82892ea8b197b1b19666173405b7ac3e5ed41d6f52ba98c"} Jan 24 00:37:01 crc kubenswrapper[4676]: I0124 00:37:01.364589 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" podStartSLOduration=1.887540129 podStartE2EDuration="3.364570134s" podCreationTimestamp="2026-01-24 00:36:58 +0000 UTC" firstStartedPulling="2026-01-24 00:36:59.334825407 +0000 UTC m=+2003.364796408" lastFinishedPulling="2026-01-24 00:37:00.811855412 +0000 UTC m=+2004.841826413" 
observedRunningTime="2026-01-24 00:37:01.354952079 +0000 UTC m=+2005.384923100" watchObservedRunningTime="2026-01-24 00:37:01.364570134 +0000 UTC m=+2005.394541135" Jan 24 00:37:02 crc kubenswrapper[4676]: I0124 00:37:02.356760 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twx92" event={"ID":"a3e85f87-765d-44bb-9187-6e5276434b05","Type":"ContainerStarted","Data":"f207c30ece0ab8c19fb55b049c931c60792beebf6fe8e09e87c0dde2cf78a60b"} Jan 24 00:37:02 crc kubenswrapper[4676]: I0124 00:37:02.384848 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-twx92" podStartSLOduration=2.8756215689999998 podStartE2EDuration="8.384833279s" podCreationTimestamp="2026-01-24 00:36:54 +0000 UTC" firstStartedPulling="2026-01-24 00:36:56.274296332 +0000 UTC m=+2000.304267333" lastFinishedPulling="2026-01-24 00:37:01.783508032 +0000 UTC m=+2005.813479043" observedRunningTime="2026-01-24 00:37:02.377353669 +0000 UTC m=+2006.407324670" watchObservedRunningTime="2026-01-24 00:37:02.384833279 +0000 UTC m=+2006.414804280" Jan 24 00:37:04 crc kubenswrapper[4676]: I0124 00:37:04.565625 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:37:04 crc kubenswrapper[4676]: I0124 00:37:04.566810 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:37:05 crc kubenswrapper[4676]: I0124 00:37:05.609780 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-twx92" podUID="a3e85f87-765d-44bb-9187-6e5276434b05" containerName="registry-server" probeResult="failure" output=< Jan 24 00:37:05 crc kubenswrapper[4676]: timeout: failed to connect service ":50051" within 1s Jan 24 00:37:05 crc kubenswrapper[4676]: > Jan 24 00:37:11 crc kubenswrapper[4676]: I0124 00:37:11.468335 4676 generic.go:334] 
"Generic (PLEG): container finished" podID="2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1" containerID="656d4d2d5ff068e82b90b5981ff0e141ca2026de96cf3e479d6668c7e1b25ad7" exitCode=0 Jan 24 00:37:11 crc kubenswrapper[4676]: I0124 00:37:11.468491 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" event={"ID":"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1","Type":"ContainerDied","Data":"656d4d2d5ff068e82b90b5981ff0e141ca2026de96cf3e479d6668c7e1b25ad7"} Jan 24 00:37:12 crc kubenswrapper[4676]: I0124 00:37:12.967808 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.030846 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m52h\" (UniqueName: \"kubernetes.io/projected/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-kube-api-access-9m52h\") pod \"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1\" (UID: \"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1\") " Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.030938 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-ssh-key-openstack-edpm-ipam\") pod \"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1\" (UID: \"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1\") " Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.031137 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-inventory\") pod \"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1\" (UID: \"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1\") " Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.037561 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-kube-api-access-9m52h" (OuterVolumeSpecName: "kube-api-access-9m52h") pod "2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1" (UID: "2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1"). InnerVolumeSpecName "kube-api-access-9m52h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.057288 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1" (UID: "2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.060701 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-inventory" (OuterVolumeSpecName: "inventory") pod "2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1" (UID: "2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.133659 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.133691 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m52h\" (UniqueName: \"kubernetes.io/projected/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-kube-api-access-9m52h\") on node \"crc\" DevicePath \"\"" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.133702 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.492967 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" event={"ID":"2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1","Type":"ContainerDied","Data":"084b4e0ab44d55e490dea686f2dbac622f75b83ccd35b40989533f2257cebbae"} Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.493028 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="084b4e0ab44d55e490dea686f2dbac622f75b83ccd35b40989533f2257cebbae" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.493042 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mhrvn" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.581485 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht"] Jan 24 00:37:13 crc kubenswrapper[4676]: E0124 00:37:13.582076 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.582103 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.582479 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.583527 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.586284 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.586496 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.586795 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.587900 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.596797 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht"] Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.643785 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-448ht\" (UID: \"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.643837 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-448ht\" (UID: \"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.644314 4676 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z45x6\" (UniqueName: \"kubernetes.io/projected/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-kube-api-access-z45x6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-448ht\" (UID: \"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.745978 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z45x6\" (UniqueName: \"kubernetes.io/projected/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-kube-api-access-z45x6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-448ht\" (UID: \"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.746264 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-448ht\" (UID: \"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.746301 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-448ht\" (UID: \"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.752345 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-448ht\" (UID: \"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.753117 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-448ht\" (UID: \"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.772979 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z45x6\" (UniqueName: \"kubernetes.io/projected/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-kube-api-access-z45x6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-448ht\" (UID: \"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" Jan 24 00:37:13 crc kubenswrapper[4676]: I0124 00:37:13.899422 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" Jan 24 00:37:14 crc kubenswrapper[4676]: I0124 00:37:14.448754 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht"] Jan 24 00:37:14 crc kubenswrapper[4676]: I0124 00:37:14.501089 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" event={"ID":"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5","Type":"ContainerStarted","Data":"52bbd7242006b7a6c49596bb17ec3e9d18b2e470c6f66a5c43577c74ed4cbaf6"} Jan 24 00:37:14 crc kubenswrapper[4676]: I0124 00:37:14.608785 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:37:14 crc kubenswrapper[4676]: I0124 00:37:14.674192 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:37:14 crc kubenswrapper[4676]: I0124 00:37:14.862183 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-twx92"] Jan 24 00:37:15 crc kubenswrapper[4676]: I0124 00:37:15.511838 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" event={"ID":"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5","Type":"ContainerStarted","Data":"5b3a2fbac507da3687186877115111b773c67d1e6dc327fc4ad3f094369418cb"} Jan 24 00:37:15 crc kubenswrapper[4676]: I0124 00:37:15.544358 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" podStartSLOduration=1.99945335 podStartE2EDuration="2.54432338s" podCreationTimestamp="2026-01-24 00:37:13 +0000 UTC" firstStartedPulling="2026-01-24 00:37:14.456596101 +0000 UTC m=+2018.486567112" lastFinishedPulling="2026-01-24 00:37:15.001466111 +0000 UTC m=+2019.031437142" 
observedRunningTime="2026-01-24 00:37:15.536738267 +0000 UTC m=+2019.566709288" watchObservedRunningTime="2026-01-24 00:37:15.54432338 +0000 UTC m=+2019.574294431" Jan 24 00:37:16 crc kubenswrapper[4676]: I0124 00:37:16.525347 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-twx92" podUID="a3e85f87-765d-44bb-9187-6e5276434b05" containerName="registry-server" containerID="cri-o://f207c30ece0ab8c19fb55b049c931c60792beebf6fe8e09e87c0dde2cf78a60b" gracePeriod=2 Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.011988 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.062393 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3e85f87-765d-44bb-9187-6e5276434b05-catalog-content\") pod \"a3e85f87-765d-44bb-9187-6e5276434b05\" (UID: \"a3e85f87-765d-44bb-9187-6e5276434b05\") " Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.062496 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxlrz\" (UniqueName: \"kubernetes.io/projected/a3e85f87-765d-44bb-9187-6e5276434b05-kube-api-access-fxlrz\") pod \"a3e85f87-765d-44bb-9187-6e5276434b05\" (UID: \"a3e85f87-765d-44bb-9187-6e5276434b05\") " Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.062580 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3e85f87-765d-44bb-9187-6e5276434b05-utilities\") pod \"a3e85f87-765d-44bb-9187-6e5276434b05\" (UID: \"a3e85f87-765d-44bb-9187-6e5276434b05\") " Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.066935 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a3e85f87-765d-44bb-9187-6e5276434b05-utilities" (OuterVolumeSpecName: "utilities") pod "a3e85f87-765d-44bb-9187-6e5276434b05" (UID: "a3e85f87-765d-44bb-9187-6e5276434b05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.077293 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3e85f87-765d-44bb-9187-6e5276434b05-kube-api-access-fxlrz" (OuterVolumeSpecName: "kube-api-access-fxlrz") pod "a3e85f87-765d-44bb-9187-6e5276434b05" (UID: "a3e85f87-765d-44bb-9187-6e5276434b05"). InnerVolumeSpecName "kube-api-access-fxlrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.164241 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3e85f87-765d-44bb-9187-6e5276434b05-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.164287 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxlrz\" (UniqueName: \"kubernetes.io/projected/a3e85f87-765d-44bb-9187-6e5276434b05-kube-api-access-fxlrz\") on node \"crc\" DevicePath \"\"" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.190769 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3e85f87-765d-44bb-9187-6e5276434b05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3e85f87-765d-44bb-9187-6e5276434b05" (UID: "a3e85f87-765d-44bb-9187-6e5276434b05"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.266358 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3e85f87-765d-44bb-9187-6e5276434b05-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.540270 4676 generic.go:334] "Generic (PLEG): container finished" podID="a3e85f87-765d-44bb-9187-6e5276434b05" containerID="f207c30ece0ab8c19fb55b049c931c60792beebf6fe8e09e87c0dde2cf78a60b" exitCode=0 Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.540328 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twx92" event={"ID":"a3e85f87-765d-44bb-9187-6e5276434b05","Type":"ContainerDied","Data":"f207c30ece0ab8c19fb55b049c931c60792beebf6fe8e09e87c0dde2cf78a60b"} Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.540401 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twx92" event={"ID":"a3e85f87-765d-44bb-9187-6e5276434b05","Type":"ContainerDied","Data":"12f78548df580a50ff02c0e908a128c3171c6cd389e2bfb1762c569bef21f86a"} Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.540433 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twx92" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.540435 4676 scope.go:117] "RemoveContainer" containerID="f207c30ece0ab8c19fb55b049c931c60792beebf6fe8e09e87c0dde2cf78a60b" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.579887 4676 scope.go:117] "RemoveContainer" containerID="7cafafc95566588ce82892ea8b197b1b19666173405b7ac3e5ed41d6f52ba98c" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.585128 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-twx92"] Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.604222 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-twx92"] Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.614299 4676 scope.go:117] "RemoveContainer" containerID="c908b71adc9cf1a9bdf7670a770704804db8201b25b1a5fbae60f99ced333109" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.650778 4676 scope.go:117] "RemoveContainer" containerID="f207c30ece0ab8c19fb55b049c931c60792beebf6fe8e09e87c0dde2cf78a60b" Jan 24 00:37:17 crc kubenswrapper[4676]: E0124 00:37:17.651176 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f207c30ece0ab8c19fb55b049c931c60792beebf6fe8e09e87c0dde2cf78a60b\": container with ID starting with f207c30ece0ab8c19fb55b049c931c60792beebf6fe8e09e87c0dde2cf78a60b not found: ID does not exist" containerID="f207c30ece0ab8c19fb55b049c931c60792beebf6fe8e09e87c0dde2cf78a60b" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.651231 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f207c30ece0ab8c19fb55b049c931c60792beebf6fe8e09e87c0dde2cf78a60b"} err="failed to get container status \"f207c30ece0ab8c19fb55b049c931c60792beebf6fe8e09e87c0dde2cf78a60b\": rpc error: code = NotFound desc = could not find container 
\"f207c30ece0ab8c19fb55b049c931c60792beebf6fe8e09e87c0dde2cf78a60b\": container with ID starting with f207c30ece0ab8c19fb55b049c931c60792beebf6fe8e09e87c0dde2cf78a60b not found: ID does not exist" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.651265 4676 scope.go:117] "RemoveContainer" containerID="7cafafc95566588ce82892ea8b197b1b19666173405b7ac3e5ed41d6f52ba98c" Jan 24 00:37:17 crc kubenswrapper[4676]: E0124 00:37:17.651615 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cafafc95566588ce82892ea8b197b1b19666173405b7ac3e5ed41d6f52ba98c\": container with ID starting with 7cafafc95566588ce82892ea8b197b1b19666173405b7ac3e5ed41d6f52ba98c not found: ID does not exist" containerID="7cafafc95566588ce82892ea8b197b1b19666173405b7ac3e5ed41d6f52ba98c" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.651650 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cafafc95566588ce82892ea8b197b1b19666173405b7ac3e5ed41d6f52ba98c"} err="failed to get container status \"7cafafc95566588ce82892ea8b197b1b19666173405b7ac3e5ed41d6f52ba98c\": rpc error: code = NotFound desc = could not find container \"7cafafc95566588ce82892ea8b197b1b19666173405b7ac3e5ed41d6f52ba98c\": container with ID starting with 7cafafc95566588ce82892ea8b197b1b19666173405b7ac3e5ed41d6f52ba98c not found: ID does not exist" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.651678 4676 scope.go:117] "RemoveContainer" containerID="c908b71adc9cf1a9bdf7670a770704804db8201b25b1a5fbae60f99ced333109" Jan 24 00:37:17 crc kubenswrapper[4676]: E0124 00:37:17.651943 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c908b71adc9cf1a9bdf7670a770704804db8201b25b1a5fbae60f99ced333109\": container with ID starting with c908b71adc9cf1a9bdf7670a770704804db8201b25b1a5fbae60f99ced333109 not found: ID does not exist" 
containerID="c908b71adc9cf1a9bdf7670a770704804db8201b25b1a5fbae60f99ced333109" Jan 24 00:37:17 crc kubenswrapper[4676]: I0124 00:37:17.651981 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c908b71adc9cf1a9bdf7670a770704804db8201b25b1a5fbae60f99ced333109"} err="failed to get container status \"c908b71adc9cf1a9bdf7670a770704804db8201b25b1a5fbae60f99ced333109\": rpc error: code = NotFound desc = could not find container \"c908b71adc9cf1a9bdf7670a770704804db8201b25b1a5fbae60f99ced333109\": container with ID starting with c908b71adc9cf1a9bdf7670a770704804db8201b25b1a5fbae60f99ced333109 not found: ID does not exist" Jan 24 00:37:18 crc kubenswrapper[4676]: I0124 00:37:18.274219 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3e85f87-765d-44bb-9187-6e5276434b05" path="/var/lib/kubelet/pods/a3e85f87-765d-44bb-9187-6e5276434b05/volumes" Jan 24 00:37:25 crc kubenswrapper[4676]: I0124 00:37:25.644889 4676 generic.go:334] "Generic (PLEG): container finished" podID="447c1e1f-d798-4bcc-a8ef-91d4ad5426a5" containerID="5b3a2fbac507da3687186877115111b773c67d1e6dc327fc4ad3f094369418cb" exitCode=0 Jan 24 00:37:25 crc kubenswrapper[4676]: I0124 00:37:25.645580 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" event={"ID":"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5","Type":"ContainerDied","Data":"5b3a2fbac507da3687186877115111b773c67d1e6dc327fc4ad3f094369418cb"} Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.055170 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.163456 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-ssh-key-openstack-edpm-ipam\") pod \"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5\" (UID: \"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5\") " Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.163581 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z45x6\" (UniqueName: \"kubernetes.io/projected/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-kube-api-access-z45x6\") pod \"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5\" (UID: \"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5\") " Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.163663 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-inventory\") pod \"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5\" (UID: \"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5\") " Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.172627 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-kube-api-access-z45x6" (OuterVolumeSpecName: "kube-api-access-z45x6") pod "447c1e1f-d798-4bcc-a8ef-91d4ad5426a5" (UID: "447c1e1f-d798-4bcc-a8ef-91d4ad5426a5"). InnerVolumeSpecName "kube-api-access-z45x6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.194621 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "447c1e1f-d798-4bcc-a8ef-91d4ad5426a5" (UID: "447c1e1f-d798-4bcc-a8ef-91d4ad5426a5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.198260 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-inventory" (OuterVolumeSpecName: "inventory") pod "447c1e1f-d798-4bcc-a8ef-91d4ad5426a5" (UID: "447c1e1f-d798-4bcc-a8ef-91d4ad5426a5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.267012 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.267047 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z45x6\" (UniqueName: \"kubernetes.io/projected/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-kube-api-access-z45x6\") on node \"crc\" DevicePath \"\"" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.267060 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/447c1e1f-d798-4bcc-a8ef-91d4ad5426a5-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.666104 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" 
event={"ID":"447c1e1f-d798-4bcc-a8ef-91d4ad5426a5","Type":"ContainerDied","Data":"52bbd7242006b7a6c49596bb17ec3e9d18b2e470c6f66a5c43577c74ed4cbaf6"} Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.666150 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52bbd7242006b7a6c49596bb17ec3e9d18b2e470c6f66a5c43577c74ed4cbaf6" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.666215 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-448ht" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.807924 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs"] Jan 24 00:37:27 crc kubenswrapper[4676]: E0124 00:37:27.808368 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e85f87-765d-44bb-9187-6e5276434b05" containerName="registry-server" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.808406 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e85f87-765d-44bb-9187-6e5276434b05" containerName="registry-server" Jan 24 00:37:27 crc kubenswrapper[4676]: E0124 00:37:27.808426 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e85f87-765d-44bb-9187-6e5276434b05" containerName="extract-content" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.808435 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e85f87-765d-44bb-9187-6e5276434b05" containerName="extract-content" Jan 24 00:37:27 crc kubenswrapper[4676]: E0124 00:37:27.808464 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447c1e1f-d798-4bcc-a8ef-91d4ad5426a5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.808474 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="447c1e1f-d798-4bcc-a8ef-91d4ad5426a5" 
containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 24 00:37:27 crc kubenswrapper[4676]: E0124 00:37:27.808501 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e85f87-765d-44bb-9187-6e5276434b05" containerName="extract-utilities" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.808511 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e85f87-765d-44bb-9187-6e5276434b05" containerName="extract-utilities" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.808727 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e85f87-765d-44bb-9187-6e5276434b05" containerName="registry-server" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.808779 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="447c1e1f-d798-4bcc-a8ef-91d4ad5426a5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.809552 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.812706 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.812712 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.813963 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.814519 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.814552 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.814960 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.817017 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.818180 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.818242 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs"] Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.882581 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqhf5\" (UniqueName: 
\"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-kube-api-access-zqhf5\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.882673 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.882837 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.882917 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.882986 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.883069 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.883237 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.883327 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.883429 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.883503 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.883613 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.883746 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.883784 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.883873 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.985361 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.985459 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.985501 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.985535 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.985586 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.985630 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.985653 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.985689 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.985743 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqhf5\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-kube-api-access-zqhf5\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.985772 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.985808 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: 
\"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.985840 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.985868 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.985900 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.990075 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.990935 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.993115 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.993623 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.995539 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc 
kubenswrapper[4676]: I0124 00:37:27.996842 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.997226 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.997303 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:27 crc kubenswrapper[4676]: I0124 00:37:27.998095 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:28 crc kubenswrapper[4676]: I0124 00:37:28.002633 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:28 crc kubenswrapper[4676]: I0124 00:37:28.003274 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:28 crc kubenswrapper[4676]: I0124 00:37:28.005240 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:28 crc kubenswrapper[4676]: I0124 00:37:28.009706 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:28 crc kubenswrapper[4676]: I0124 00:37:28.016600 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqhf5\" (UniqueName: 
\"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-kube-api-access-zqhf5\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:28 crc kubenswrapper[4676]: I0124 00:37:28.171884 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:37:28 crc kubenswrapper[4676]: I0124 00:37:28.567652 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs"] Jan 24 00:37:28 crc kubenswrapper[4676]: I0124 00:37:28.674444 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" event={"ID":"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c","Type":"ContainerStarted","Data":"1de2ad796c2575a2a4e4825e8cb53fd6f1393b20d709f96e27cefc52b380319e"} Jan 24 00:37:29 crc kubenswrapper[4676]: I0124 00:37:29.703041 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" event={"ID":"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c","Type":"ContainerStarted","Data":"ea066811c3fc8f44711798a6a7f1cdd0df2c04ef08ae8fd4743b54536a040057"} Jan 24 00:37:29 crc kubenswrapper[4676]: I0124 00:37:29.741667 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" podStartSLOduration=2.324156073 podStartE2EDuration="2.741646616s" podCreationTimestamp="2026-01-24 00:37:27 +0000 UTC" firstStartedPulling="2026-01-24 00:37:28.56594606 +0000 UTC m=+2032.595917061" lastFinishedPulling="2026-01-24 00:37:28.983436603 +0000 UTC m=+2033.013407604" observedRunningTime="2026-01-24 00:37:29.737976773 +0000 UTC m=+2033.767947814" watchObservedRunningTime="2026-01-24 00:37:29.741646616 +0000 UTC 
m=+2033.771617637" Jan 24 00:37:39 crc kubenswrapper[4676]: I0124 00:37:39.364102 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:37:39 crc kubenswrapper[4676]: I0124 00:37:39.364844 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:38:09 crc kubenswrapper[4676]: I0124 00:38:09.364240 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:38:09 crc kubenswrapper[4676]: I0124 00:38:09.365075 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:38:14 crc kubenswrapper[4676]: I0124 00:38:14.134109 4676 generic.go:334] "Generic (PLEG): container finished" podID="e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" containerID="ea066811c3fc8f44711798a6a7f1cdd0df2c04ef08ae8fd4743b54536a040057" exitCode=0 Jan 24 00:38:14 crc kubenswrapper[4676]: I0124 00:38:14.134189 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" 
event={"ID":"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c","Type":"ContainerDied","Data":"ea066811c3fc8f44711798a6a7f1cdd0df2c04ef08ae8fd4743b54536a040057"} Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.607690 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.707120 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-telemetry-combined-ca-bundle\") pod \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.707367 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.707510 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-nova-combined-ca-bundle\") pod \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.707626 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-bootstrap-combined-ca-bundle\") pod \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.707698 4676 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-libvirt-combined-ca-bundle\") pod \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.707782 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.707854 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqhf5\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-kube-api-access-zqhf5\") pod \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.707930 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-repo-setup-combined-ca-bundle\") pod \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.708006 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-inventory\") pod \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.708132 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-ovn-combined-ca-bundle\") pod \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.708211 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-ssh-key-openstack-edpm-ipam\") pod \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.708287 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.708417 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.708501 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-neutron-metadata-combined-ca-bundle\") pod \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\" (UID: \"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c\") " Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.715988 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" (UID: "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.716062 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" (UID: "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.717102 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" (UID: "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.717240 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" (UID: "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.717400 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" (UID: "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.719025 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" (UID: "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.720436 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" (UID: "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.722353 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-kube-api-access-zqhf5" (OuterVolumeSpecName: "kube-api-access-zqhf5") pod "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" (UID: "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c"). InnerVolumeSpecName "kube-api-access-zqhf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.723010 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" (UID: "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.725991 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" (UID: "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.727248 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" (UID: "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.729605 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" (UID: "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.742831 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" (UID: "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.743196 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-inventory" (OuterVolumeSpecName: "inventory") pod "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" (UID: "e9d2a92f-e22e-44c5-86c5-8b38824e3d4c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.810324 4676 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.810352 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.810363 4676 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.810384 4676 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.810395 4676 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.810404 4676 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.810413 4676 reconciler_common.go:293] "Volume 
detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.810422 4676 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.810432 4676 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.810440 4676 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.810449 4676 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.810458 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqhf5\" (UniqueName: \"kubernetes.io/projected/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-kube-api-access-zqhf5\") on node \"crc\" DevicePath \"\"" Jan 24 00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.810467 4676 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 
00:38:15 crc kubenswrapper[4676]: I0124 00:38:15.810476 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9d2a92f-e22e-44c5-86c5-8b38824e3d4c-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.158760 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" event={"ID":"e9d2a92f-e22e-44c5-86c5-8b38824e3d4c","Type":"ContainerDied","Data":"1de2ad796c2575a2a4e4825e8cb53fd6f1393b20d709f96e27cefc52b380319e"} Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.158819 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1de2ad796c2575a2a4e4825e8cb53fd6f1393b20d709f96e27cefc52b380319e" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.158835 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.296455 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv"] Jan 24 00:38:16 crc kubenswrapper[4676]: E0124 00:38:16.296812 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.296832 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.297012 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d2a92f-e22e-44c5-86c5-8b38824e3d4c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.297573 4676 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.300482 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.300534 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.300584 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.301418 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.301892 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.320261 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv"] Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.422594 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-527bv\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.422700 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-527bv\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.422751 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/55444bfa-a024-4606-aa57-6456c6688e52-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-527bv\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.422780 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n99s\" (UniqueName: \"kubernetes.io/projected/55444bfa-a024-4606-aa57-6456c6688e52-kube-api-access-2n99s\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-527bv\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.422796 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-527bv\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.524969 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-527bv\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.525072 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/55444bfa-a024-4606-aa57-6456c6688e52-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-527bv\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.525137 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n99s\" (UniqueName: \"kubernetes.io/projected/55444bfa-a024-4606-aa57-6456c6688e52-kube-api-access-2n99s\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-527bv\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.525160 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-527bv\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.525292 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-527bv\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.527366 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/55444bfa-a024-4606-aa57-6456c6688e52-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-527bv\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.532355 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-527bv\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.533434 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-527bv\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.538759 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-527bv\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.542131 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n99s\" (UniqueName: \"kubernetes.io/projected/55444bfa-a024-4606-aa57-6456c6688e52-kube-api-access-2n99s\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-527bv\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.615055 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:38:16 crc kubenswrapper[4676]: I0124 00:38:16.985327 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv"] Jan 24 00:38:17 crc kubenswrapper[4676]: I0124 00:38:17.172114 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" event={"ID":"55444bfa-a024-4606-aa57-6456c6688e52","Type":"ContainerStarted","Data":"9ea45bf2e80518f4149bc0176a1374972660dc28f0ba1c11a5d6570b6eca38bf"} Jan 24 00:38:18 crc kubenswrapper[4676]: I0124 00:38:18.185277 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" event={"ID":"55444bfa-a024-4606-aa57-6456c6688e52","Type":"ContainerStarted","Data":"ff5912bb9ffd15ef3b1a9651aa8b86793aa201dc24c7e3f08fe1351670e6d7c0"} Jan 24 00:38:18 crc kubenswrapper[4676]: I0124 00:38:18.218424 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" podStartSLOduration=1.701286338 podStartE2EDuration="2.218356704s" podCreationTimestamp="2026-01-24 00:38:16 +0000 UTC" firstStartedPulling="2026-01-24 00:38:16.999632725 +0000 UTC m=+2081.029603726" lastFinishedPulling="2026-01-24 00:38:17.516703091 +0000 UTC m=+2081.546674092" observedRunningTime="2026-01-24 00:38:18.208661797 +0000 UTC m=+2082.238632838" watchObservedRunningTime="2026-01-24 00:38:18.218356704 +0000 UTC m=+2082.248327735" Jan 24 00:38:39 crc kubenswrapper[4676]: I0124 00:38:39.364435 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:38:39 crc kubenswrapper[4676]: I0124 00:38:39.365175 4676 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:38:39 crc kubenswrapper[4676]: I0124 00:38:39.365251 4676 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:38:39 crc kubenswrapper[4676]: I0124 00:38:39.366604 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"658098ecc7ebeb43955aee4f3317cbcc452c89ec3e2a5ddc24ed196ea90b98d2"} pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 00:38:39 crc kubenswrapper[4676]: I0124 00:38:39.366727 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" containerID="cri-o://658098ecc7ebeb43955aee4f3317cbcc452c89ec3e2a5ddc24ed196ea90b98d2" gracePeriod=600 Jan 24 00:38:40 crc kubenswrapper[4676]: I0124 00:38:40.401604 4676 generic.go:334] "Generic (PLEG): container finished" podID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerID="658098ecc7ebeb43955aee4f3317cbcc452c89ec3e2a5ddc24ed196ea90b98d2" exitCode=0 Jan 24 00:38:40 crc kubenswrapper[4676]: I0124 00:38:40.401676 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerDied","Data":"658098ecc7ebeb43955aee4f3317cbcc452c89ec3e2a5ddc24ed196ea90b98d2"} Jan 24 00:38:40 crc kubenswrapper[4676]: I0124 
00:38:40.402100 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac"} Jan 24 00:38:40 crc kubenswrapper[4676]: I0124 00:38:40.402125 4676 scope.go:117] "RemoveContainer" containerID="9c504e3091e16d8ba5c5f912afa13f14ae4119c50ca2e112cebacbce89bb6751" Jan 24 00:39:34 crc kubenswrapper[4676]: I0124 00:39:34.916320 4676 generic.go:334] "Generic (PLEG): container finished" podID="55444bfa-a024-4606-aa57-6456c6688e52" containerID="ff5912bb9ffd15ef3b1a9651aa8b86793aa201dc24c7e3f08fe1351670e6d7c0" exitCode=0 Jan 24 00:39:34 crc kubenswrapper[4676]: I0124 00:39:34.916418 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" event={"ID":"55444bfa-a024-4606-aa57-6456c6688e52","Type":"ContainerDied","Data":"ff5912bb9ffd15ef3b1a9651aa8b86793aa201dc24c7e3f08fe1351670e6d7c0"} Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.443944 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.625891 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-inventory\") pod \"55444bfa-a024-4606-aa57-6456c6688e52\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.626032 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n99s\" (UniqueName: \"kubernetes.io/projected/55444bfa-a024-4606-aa57-6456c6688e52-kube-api-access-2n99s\") pod \"55444bfa-a024-4606-aa57-6456c6688e52\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.627156 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-ssh-key-openstack-edpm-ipam\") pod \"55444bfa-a024-4606-aa57-6456c6688e52\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.627249 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-ovn-combined-ca-bundle\") pod \"55444bfa-a024-4606-aa57-6456c6688e52\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.627283 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/55444bfa-a024-4606-aa57-6456c6688e52-ovncontroller-config-0\") pod \"55444bfa-a024-4606-aa57-6456c6688e52\" (UID: \"55444bfa-a024-4606-aa57-6456c6688e52\") " Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.632851 4676 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55444bfa-a024-4606-aa57-6456c6688e52-kube-api-access-2n99s" (OuterVolumeSpecName: "kube-api-access-2n99s") pod "55444bfa-a024-4606-aa57-6456c6688e52" (UID: "55444bfa-a024-4606-aa57-6456c6688e52"). InnerVolumeSpecName "kube-api-access-2n99s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.636526 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "55444bfa-a024-4606-aa57-6456c6688e52" (UID: "55444bfa-a024-4606-aa57-6456c6688e52"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.658534 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-inventory" (OuterVolumeSpecName: "inventory") pod "55444bfa-a024-4606-aa57-6456c6688e52" (UID: "55444bfa-a024-4606-aa57-6456c6688e52"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.672938 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55444bfa-a024-4606-aa57-6456c6688e52-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "55444bfa-a024-4606-aa57-6456c6688e52" (UID: "55444bfa-a024-4606-aa57-6456c6688e52"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.672986 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "55444bfa-a024-4606-aa57-6456c6688e52" (UID: "55444bfa-a024-4606-aa57-6456c6688e52"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.729902 4676 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.730066 4676 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/55444bfa-a024-4606-aa57-6456c6688e52-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.730150 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.730353 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n99s\" (UniqueName: \"kubernetes.io/projected/55444bfa-a024-4606-aa57-6456c6688e52-kube-api-access-2n99s\") on node \"crc\" DevicePath \"\"" Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.730482 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55444bfa-a024-4606-aa57-6456c6688e52-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.936570 4676 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" event={"ID":"55444bfa-a024-4606-aa57-6456c6688e52","Type":"ContainerDied","Data":"9ea45bf2e80518f4149bc0176a1374972660dc28f0ba1c11a5d6570b6eca38bf"} Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.936946 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ea45bf2e80518f4149bc0176a1374972660dc28f0ba1c11a5d6570b6eca38bf" Jan 24 00:39:36 crc kubenswrapper[4676]: I0124 00:39:36.936630 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-527bv" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.070187 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7"] Jan 24 00:39:37 crc kubenswrapper[4676]: E0124 00:39:37.070707 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55444bfa-a024-4606-aa57-6456c6688e52" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.070728 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="55444bfa-a024-4606-aa57-6456c6688e52" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.070944 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="55444bfa-a024-4606-aa57-6456c6688e52" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.071833 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.074134 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.074336 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.074583 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.074751 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.075112 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.075844 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.081724 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7"] Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.142197 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.142324 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.142563 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.142921 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.142964 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.143039 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-qq97x\" (UniqueName: \"kubernetes.io/projected/cba640eb-c65f-46be-af5d-5126418c361a-kube-api-access-qq97x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.244409 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.244521 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.244545 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.244618 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.244641 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.244662 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq97x\" (UniqueName: \"kubernetes.io/projected/cba640eb-c65f-46be-af5d-5126418c361a-kube-api-access-qq97x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.250186 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.251853 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-neutron-ovn-metadata-agent-neutron-config-0\") 
pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.253213 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.254121 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.257073 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.260680 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq97x\" (UniqueName: \"kubernetes.io/projected/cba640eb-c65f-46be-af5d-5126418c361a-kube-api-access-qq97x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:37 crc kubenswrapper[4676]: I0124 00:39:37.401563 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:39:38 crc kubenswrapper[4676]: I0124 00:39:38.006982 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7"] Jan 24 00:39:38 crc kubenswrapper[4676]: I0124 00:39:38.954929 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" event={"ID":"cba640eb-c65f-46be-af5d-5126418c361a","Type":"ContainerStarted","Data":"768d4cbb4523908822a2ad0486caa07ddbcb8814acacc69c2b351ff7ced7070f"} Jan 24 00:39:38 crc kubenswrapper[4676]: I0124 00:39:38.955178 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" event={"ID":"cba640eb-c65f-46be-af5d-5126418c361a","Type":"ContainerStarted","Data":"87713fa55037d8facd261b161912fac21124b374f9253de8714b996f2b48a4c2"} Jan 24 00:39:38 crc kubenswrapper[4676]: I0124 00:39:38.984363 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" podStartSLOduration=1.433218047 podStartE2EDuration="1.98433532s" podCreationTimestamp="2026-01-24 00:39:37 +0000 UTC" firstStartedPulling="2026-01-24 00:39:38.02554608 +0000 UTC m=+2162.055517101" lastFinishedPulling="2026-01-24 00:39:38.576663373 +0000 UTC m=+2162.606634374" observedRunningTime="2026-01-24 00:39:38.97591325 +0000 UTC m=+2163.005884271" watchObservedRunningTime="2026-01-24 00:39:38.98433532 +0000 UTC m=+2163.014306341" Jan 24 00:40:38 crc kubenswrapper[4676]: I0124 00:40:38.542698 4676 generic.go:334] "Generic (PLEG): container finished" podID="cba640eb-c65f-46be-af5d-5126418c361a" 
containerID="768d4cbb4523908822a2ad0486caa07ddbcb8814acacc69c2b351ff7ced7070f" exitCode=0 Jan 24 00:40:38 crc kubenswrapper[4676]: I0124 00:40:38.542750 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" event={"ID":"cba640eb-c65f-46be-af5d-5126418c361a","Type":"ContainerDied","Data":"768d4cbb4523908822a2ad0486caa07ddbcb8814acacc69c2b351ff7ced7070f"} Jan 24 00:40:39 crc kubenswrapper[4676]: I0124 00:40:39.364817 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:40:39 crc kubenswrapper[4676]: I0124 00:40:39.365119 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:40:39 crc kubenswrapper[4676]: I0124 00:40:39.980862 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.069171 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-ssh-key-openstack-edpm-ipam\") pod \"cba640eb-c65f-46be-af5d-5126418c361a\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.069302 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-neutron-metadata-combined-ca-bundle\") pod \"cba640eb-c65f-46be-af5d-5126418c361a\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.069359 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"cba640eb-c65f-46be-af5d-5126418c361a\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.069409 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-nova-metadata-neutron-config-0\") pod \"cba640eb-c65f-46be-af5d-5126418c361a\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.069447 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-inventory\") pod \"cba640eb-c65f-46be-af5d-5126418c361a\" (UID: 
\"cba640eb-c65f-46be-af5d-5126418c361a\") " Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.069510 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq97x\" (UniqueName: \"kubernetes.io/projected/cba640eb-c65f-46be-af5d-5126418c361a-kube-api-access-qq97x\") pod \"cba640eb-c65f-46be-af5d-5126418c361a\" (UID: \"cba640eb-c65f-46be-af5d-5126418c361a\") " Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.079227 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cba640eb-c65f-46be-af5d-5126418c361a-kube-api-access-qq97x" (OuterVolumeSpecName: "kube-api-access-qq97x") pod "cba640eb-c65f-46be-af5d-5126418c361a" (UID: "cba640eb-c65f-46be-af5d-5126418c361a"). InnerVolumeSpecName "kube-api-access-qq97x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.083791 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "cba640eb-c65f-46be-af5d-5126418c361a" (UID: "cba640eb-c65f-46be-af5d-5126418c361a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.097525 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cba640eb-c65f-46be-af5d-5126418c361a" (UID: "cba640eb-c65f-46be-af5d-5126418c361a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.098991 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "cba640eb-c65f-46be-af5d-5126418c361a" (UID: "cba640eb-c65f-46be-af5d-5126418c361a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.102706 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-inventory" (OuterVolumeSpecName: "inventory") pod "cba640eb-c65f-46be-af5d-5126418c361a" (UID: "cba640eb-c65f-46be-af5d-5126418c361a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.114573 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "cba640eb-c65f-46be-af5d-5126418c361a" (UID: "cba640eb-c65f-46be-af5d-5126418c361a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.172197 4676 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.172273 4676 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.172288 4676 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.172306 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.172320 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq97x\" (UniqueName: \"kubernetes.io/projected/cba640eb-c65f-46be-af5d-5126418c361a-kube-api-access-qq97x\") on node \"crc\" DevicePath \"\"" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.172332 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cba640eb-c65f-46be-af5d-5126418c361a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.565214 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" event={"ID":"cba640eb-c65f-46be-af5d-5126418c361a","Type":"ContainerDied","Data":"87713fa55037d8facd261b161912fac21124b374f9253de8714b996f2b48a4c2"} Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.565249 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.565270 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87713fa55037d8facd261b161912fac21124b374f9253de8714b996f2b48a4c2" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.691845 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99"] Jan 24 00:40:40 crc kubenswrapper[4676]: E0124 00:40:40.692355 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cba640eb-c65f-46be-af5d-5126418c361a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.692408 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="cba640eb-c65f-46be-af5d-5126418c361a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.692793 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="cba640eb-c65f-46be-af5d-5126418c361a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.694403 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.699309 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.700429 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.700733 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.701972 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.704225 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.715916 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99"] Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.785126 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddk99\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.785203 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddk99\" (UID: 
\"2cafe497-da96-4b39-bec2-1ec54f859303\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.785393 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddk99\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.785460 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h6gz\" (UniqueName: \"kubernetes.io/projected/2cafe497-da96-4b39-bec2-1ec54f859303-kube-api-access-9h6gz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddk99\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.785585 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddk99\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.887328 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddk99\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 
00:40:40.887402 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h6gz\" (UniqueName: \"kubernetes.io/projected/2cafe497-da96-4b39-bec2-1ec54f859303-kube-api-access-9h6gz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddk99\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.887464 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddk99\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.887515 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddk99\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.887555 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddk99\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.893308 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddk99\" (UID: 
\"2cafe497-da96-4b39-bec2-1ec54f859303\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.893607 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddk99\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.894660 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddk99\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.895144 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddk99\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:40 crc kubenswrapper[4676]: I0124 00:40:40.910856 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h6gz\" (UniqueName: \"kubernetes.io/projected/2cafe497-da96-4b39-bec2-1ec54f859303-kube-api-access-9h6gz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ddk99\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:41 crc kubenswrapper[4676]: I0124 00:40:41.018527 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:40:41 crc kubenswrapper[4676]: I0124 00:40:41.607532 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99"] Jan 24 00:40:41 crc kubenswrapper[4676]: I0124 00:40:41.607788 4676 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 00:40:42 crc kubenswrapper[4676]: I0124 00:40:42.588232 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" event={"ID":"2cafe497-da96-4b39-bec2-1ec54f859303","Type":"ContainerStarted","Data":"9bb06db0d0e1ec2c2b37d9e152501f764e96d00ae7a309b970bb6c0a4f2a01db"} Jan 24 00:40:42 crc kubenswrapper[4676]: I0124 00:40:42.589415 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" event={"ID":"2cafe497-da96-4b39-bec2-1ec54f859303","Type":"ContainerStarted","Data":"1547e9b9c1a3265026b8579e925dfd46811b1b86e0e0cebd473a29138f45bc55"} Jan 24 00:40:42 crc kubenswrapper[4676]: I0124 00:40:42.616616 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" podStartSLOduration=2.072917046 podStartE2EDuration="2.616599349s" podCreationTimestamp="2026-01-24 00:40:40 +0000 UTC" firstStartedPulling="2026-01-24 00:40:41.607542919 +0000 UTC m=+2225.637513920" lastFinishedPulling="2026-01-24 00:40:42.151225222 +0000 UTC m=+2226.181196223" observedRunningTime="2026-01-24 00:40:42.615241307 +0000 UTC m=+2226.645212308" watchObservedRunningTime="2026-01-24 00:40:42.616599349 +0000 UTC m=+2226.646570350" Jan 24 00:41:08 crc kubenswrapper[4676]: I0124 00:41:08.941014 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f4gcx"] Jan 24 00:41:08 crc kubenswrapper[4676]: I0124 00:41:08.946324 4676 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:08 crc kubenswrapper[4676]: I0124 00:41:08.958369 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f4gcx"] Jan 24 00:41:09 crc kubenswrapper[4676]: I0124 00:41:09.016368 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-catalog-content\") pod \"certified-operators-f4gcx\" (UID: \"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847\") " pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:09 crc kubenswrapper[4676]: I0124 00:41:09.016604 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-utilities\") pod \"certified-operators-f4gcx\" (UID: \"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847\") " pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:09 crc kubenswrapper[4676]: I0124 00:41:09.016744 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjcm2\" (UniqueName: \"kubernetes.io/projected/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-kube-api-access-bjcm2\") pod \"certified-operators-f4gcx\" (UID: \"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847\") " pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:09 crc kubenswrapper[4676]: I0124 00:41:09.118240 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjcm2\" (UniqueName: \"kubernetes.io/projected/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-kube-api-access-bjcm2\") pod \"certified-operators-f4gcx\" (UID: \"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847\") " pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:09 crc kubenswrapper[4676]: 
I0124 00:41:09.118334 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-catalog-content\") pod \"certified-operators-f4gcx\" (UID: \"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847\") " pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:09 crc kubenswrapper[4676]: I0124 00:41:09.118542 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-utilities\") pod \"certified-operators-f4gcx\" (UID: \"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847\") " pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:09 crc kubenswrapper[4676]: I0124 00:41:09.118901 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-catalog-content\") pod \"certified-operators-f4gcx\" (UID: \"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847\") " pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:09 crc kubenswrapper[4676]: I0124 00:41:09.118970 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-utilities\") pod \"certified-operators-f4gcx\" (UID: \"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847\") " pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:09 crc kubenswrapper[4676]: I0124 00:41:09.149022 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjcm2\" (UniqueName: \"kubernetes.io/projected/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-kube-api-access-bjcm2\") pod \"certified-operators-f4gcx\" (UID: \"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847\") " pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:09 crc kubenswrapper[4676]: I0124 00:41:09.279915 4676 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:09 crc kubenswrapper[4676]: I0124 00:41:09.364770 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:41:09 crc kubenswrapper[4676]: I0124 00:41:09.364822 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:41:09 crc kubenswrapper[4676]: I0124 00:41:09.881740 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f4gcx"] Jan 24 00:41:10 crc kubenswrapper[4676]: I0124 00:41:10.902197 4676 generic.go:334] "Generic (PLEG): container finished" podID="96dc44b8-b0e8-4f7e-bc80-9b4cf3971847" containerID="737d34bc6c9c50d05e277f8c8b483264ff8b1d1d8f9f7474fd3cc837bd2e60f7" exitCode=0 Jan 24 00:41:10 crc kubenswrapper[4676]: I0124 00:41:10.902249 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4gcx" event={"ID":"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847","Type":"ContainerDied","Data":"737d34bc6c9c50d05e277f8c8b483264ff8b1d1d8f9f7474fd3cc837bd2e60f7"} Jan 24 00:41:10 crc kubenswrapper[4676]: I0124 00:41:10.902570 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4gcx" event={"ID":"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847","Type":"ContainerStarted","Data":"dd0d72b8a2070ae18bd60f3047ac43dfaafa2cc30c6fa2ae4ea63352fa577c6b"} Jan 24 00:41:12 crc kubenswrapper[4676]: I0124 00:41:12.926171 4676 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4gcx" event={"ID":"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847","Type":"ContainerStarted","Data":"5d44871de50f32676b8740620a80ff69be7c3c2e220b2f2348f10f88bfbeede5"} Jan 24 00:41:13 crc kubenswrapper[4676]: I0124 00:41:13.937680 4676 generic.go:334] "Generic (PLEG): container finished" podID="96dc44b8-b0e8-4f7e-bc80-9b4cf3971847" containerID="5d44871de50f32676b8740620a80ff69be7c3c2e220b2f2348f10f88bfbeede5" exitCode=0 Jan 24 00:41:13 crc kubenswrapper[4676]: I0124 00:41:13.937974 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4gcx" event={"ID":"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847","Type":"ContainerDied","Data":"5d44871de50f32676b8740620a80ff69be7c3c2e220b2f2348f10f88bfbeede5"} Jan 24 00:41:15 crc kubenswrapper[4676]: I0124 00:41:15.955440 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4gcx" event={"ID":"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847","Type":"ContainerStarted","Data":"b429ee368e1d2370a601fe8e4e71726df16a9d3feca5be6e6611ac0b69ecd2ce"} Jan 24 00:41:15 crc kubenswrapper[4676]: I0124 00:41:15.981273 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f4gcx" podStartSLOduration=4.086522656 podStartE2EDuration="7.981255712s" podCreationTimestamp="2026-01-24 00:41:08 +0000 UTC" firstStartedPulling="2026-01-24 00:41:10.904340695 +0000 UTC m=+2254.934311696" lastFinishedPulling="2026-01-24 00:41:14.799073751 +0000 UTC m=+2258.829044752" observedRunningTime="2026-01-24 00:41:15.979323432 +0000 UTC m=+2260.009294433" watchObservedRunningTime="2026-01-24 00:41:15.981255712 +0000 UTC m=+2260.011226713" Jan 24 00:41:19 crc kubenswrapper[4676]: I0124 00:41:19.280616 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:19 crc 
kubenswrapper[4676]: I0124 00:41:19.281082 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:19 crc kubenswrapper[4676]: I0124 00:41:19.350672 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:20 crc kubenswrapper[4676]: I0124 00:41:20.083697 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:20 crc kubenswrapper[4676]: I0124 00:41:20.140246 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2tgmv"] Jan 24 00:41:20 crc kubenswrapper[4676]: I0124 00:41:20.142032 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:20 crc kubenswrapper[4676]: I0124 00:41:20.175286 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2tgmv"] Jan 24 00:41:20 crc kubenswrapper[4676]: I0124 00:41:20.255578 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b54a57e-1dba-4552-b03e-94685f89b614-utilities\") pod \"redhat-marketplace-2tgmv\" (UID: \"5b54a57e-1dba-4552-b03e-94685f89b614\") " pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:20 crc kubenswrapper[4676]: I0124 00:41:20.255647 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b54a57e-1dba-4552-b03e-94685f89b614-catalog-content\") pod \"redhat-marketplace-2tgmv\" (UID: \"5b54a57e-1dba-4552-b03e-94685f89b614\") " pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:20 crc kubenswrapper[4676]: I0124 00:41:20.255810 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zknhs\" (UniqueName: \"kubernetes.io/projected/5b54a57e-1dba-4552-b03e-94685f89b614-kube-api-access-zknhs\") pod \"redhat-marketplace-2tgmv\" (UID: \"5b54a57e-1dba-4552-b03e-94685f89b614\") " pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:20 crc kubenswrapper[4676]: I0124 00:41:20.357358 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zknhs\" (UniqueName: \"kubernetes.io/projected/5b54a57e-1dba-4552-b03e-94685f89b614-kube-api-access-zknhs\") pod \"redhat-marketplace-2tgmv\" (UID: \"5b54a57e-1dba-4552-b03e-94685f89b614\") " pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:20 crc kubenswrapper[4676]: I0124 00:41:20.357431 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b54a57e-1dba-4552-b03e-94685f89b614-utilities\") pod \"redhat-marketplace-2tgmv\" (UID: \"5b54a57e-1dba-4552-b03e-94685f89b614\") " pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:20 crc kubenswrapper[4676]: I0124 00:41:20.357460 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b54a57e-1dba-4552-b03e-94685f89b614-catalog-content\") pod \"redhat-marketplace-2tgmv\" (UID: \"5b54a57e-1dba-4552-b03e-94685f89b614\") " pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:20 crc kubenswrapper[4676]: I0124 00:41:20.359073 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b54a57e-1dba-4552-b03e-94685f89b614-utilities\") pod \"redhat-marketplace-2tgmv\" (UID: \"5b54a57e-1dba-4552-b03e-94685f89b614\") " pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:20 crc kubenswrapper[4676]: I0124 00:41:20.359445 4676 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b54a57e-1dba-4552-b03e-94685f89b614-catalog-content\") pod \"redhat-marketplace-2tgmv\" (UID: \"5b54a57e-1dba-4552-b03e-94685f89b614\") " pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:20 crc kubenswrapper[4676]: I0124 00:41:20.389825 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zknhs\" (UniqueName: \"kubernetes.io/projected/5b54a57e-1dba-4552-b03e-94685f89b614-kube-api-access-zknhs\") pod \"redhat-marketplace-2tgmv\" (UID: \"5b54a57e-1dba-4552-b03e-94685f89b614\") " pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:20 crc kubenswrapper[4676]: I0124 00:41:20.461663 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:20 crc kubenswrapper[4676]: I0124 00:41:20.954106 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2tgmv"] Jan 24 00:41:20 crc kubenswrapper[4676]: W0124 00:41:20.963664 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b54a57e_1dba_4552_b03e_94685f89b614.slice/crio-603cdc7a991c7585c94a502a0ae82308201bad02f44f4c7687a51077bce3e693 WatchSource:0}: Error finding container 603cdc7a991c7585c94a502a0ae82308201bad02f44f4c7687a51077bce3e693: Status 404 returned error can't find the container with id 603cdc7a991c7585c94a502a0ae82308201bad02f44f4c7687a51077bce3e693 Jan 24 00:41:21 crc kubenswrapper[4676]: I0124 00:41:21.004749 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tgmv" event={"ID":"5b54a57e-1dba-4552-b03e-94685f89b614","Type":"ContainerStarted","Data":"603cdc7a991c7585c94a502a0ae82308201bad02f44f4c7687a51077bce3e693"} Jan 24 00:41:21 crc kubenswrapper[4676]: I0124 00:41:21.799813 4676 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f4gcx"] Jan 24 00:41:22 crc kubenswrapper[4676]: I0124 00:41:22.017465 4676 generic.go:334] "Generic (PLEG): container finished" podID="5b54a57e-1dba-4552-b03e-94685f89b614" containerID="c2701b192e86de32a0ec79475d981d5c7c6144cfecdced05bb10231d98dfc4e0" exitCode=0 Jan 24 00:41:22 crc kubenswrapper[4676]: I0124 00:41:22.017552 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tgmv" event={"ID":"5b54a57e-1dba-4552-b03e-94685f89b614","Type":"ContainerDied","Data":"c2701b192e86de32a0ec79475d981d5c7c6144cfecdced05bb10231d98dfc4e0"} Jan 24 00:41:22 crc kubenswrapper[4676]: I0124 00:41:22.018189 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f4gcx" podUID="96dc44b8-b0e8-4f7e-bc80-9b4cf3971847" containerName="registry-server" containerID="cri-o://b429ee368e1d2370a601fe8e4e71726df16a9d3feca5be6e6611ac0b69ecd2ce" gracePeriod=2 Jan 24 00:41:22 crc kubenswrapper[4676]: I0124 00:41:22.443248 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:22 crc kubenswrapper[4676]: I0124 00:41:22.505310 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjcm2\" (UniqueName: \"kubernetes.io/projected/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-kube-api-access-bjcm2\") pod \"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847\" (UID: \"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847\") " Jan 24 00:41:22 crc kubenswrapper[4676]: I0124 00:41:22.505550 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-catalog-content\") pod \"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847\" (UID: \"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847\") " Jan 24 00:41:22 crc kubenswrapper[4676]: I0124 00:41:22.505646 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-utilities\") pod \"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847\" (UID: \"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847\") " Jan 24 00:41:22 crc kubenswrapper[4676]: I0124 00:41:22.506868 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-utilities" (OuterVolumeSpecName: "utilities") pod "96dc44b8-b0e8-4f7e-bc80-9b4cf3971847" (UID: "96dc44b8-b0e8-4f7e-bc80-9b4cf3971847"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:41:22 crc kubenswrapper[4676]: I0124 00:41:22.512847 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-kube-api-access-bjcm2" (OuterVolumeSpecName: "kube-api-access-bjcm2") pod "96dc44b8-b0e8-4f7e-bc80-9b4cf3971847" (UID: "96dc44b8-b0e8-4f7e-bc80-9b4cf3971847"). InnerVolumeSpecName "kube-api-access-bjcm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:41:22 crc kubenswrapper[4676]: I0124 00:41:22.566970 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96dc44b8-b0e8-4f7e-bc80-9b4cf3971847" (UID: "96dc44b8-b0e8-4f7e-bc80-9b4cf3971847"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:41:22 crc kubenswrapper[4676]: I0124 00:41:22.608658 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:41:22 crc kubenswrapper[4676]: I0124 00:41:22.608704 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:41:22 crc kubenswrapper[4676]: I0124 00:41:22.608714 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjcm2\" (UniqueName: \"kubernetes.io/projected/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847-kube-api-access-bjcm2\") on node \"crc\" DevicePath \"\"" Jan 24 00:41:23 crc kubenswrapper[4676]: I0124 00:41:23.026291 4676 generic.go:334] "Generic (PLEG): container finished" podID="96dc44b8-b0e8-4f7e-bc80-9b4cf3971847" containerID="b429ee368e1d2370a601fe8e4e71726df16a9d3feca5be6e6611ac0b69ecd2ce" exitCode=0 Jan 24 00:41:23 crc kubenswrapper[4676]: I0124 00:41:23.026332 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4gcx" event={"ID":"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847","Type":"ContainerDied","Data":"b429ee368e1d2370a601fe8e4e71726df16a9d3feca5be6e6611ac0b69ecd2ce"} Jan 24 00:41:23 crc kubenswrapper[4676]: I0124 00:41:23.026359 4676 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-f4gcx" event={"ID":"96dc44b8-b0e8-4f7e-bc80-9b4cf3971847","Type":"ContainerDied","Data":"dd0d72b8a2070ae18bd60f3047ac43dfaafa2cc30c6fa2ae4ea63352fa577c6b"} Jan 24 00:41:23 crc kubenswrapper[4676]: I0124 00:41:23.026389 4676 scope.go:117] "RemoveContainer" containerID="b429ee368e1d2370a601fe8e4e71726df16a9d3feca5be6e6611ac0b69ecd2ce" Jan 24 00:41:23 crc kubenswrapper[4676]: I0124 00:41:23.026367 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4gcx" Jan 24 00:41:23 crc kubenswrapper[4676]: I0124 00:41:23.083575 4676 scope.go:117] "RemoveContainer" containerID="5d44871de50f32676b8740620a80ff69be7c3c2e220b2f2348f10f88bfbeede5" Jan 24 00:41:23 crc kubenswrapper[4676]: I0124 00:41:23.087214 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f4gcx"] Jan 24 00:41:23 crc kubenswrapper[4676]: I0124 00:41:23.098455 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f4gcx"] Jan 24 00:41:23 crc kubenswrapper[4676]: I0124 00:41:23.122411 4676 scope.go:117] "RemoveContainer" containerID="737d34bc6c9c50d05e277f8c8b483264ff8b1d1d8f9f7474fd3cc837bd2e60f7" Jan 24 00:41:23 crc kubenswrapper[4676]: I0124 00:41:23.159738 4676 scope.go:117] "RemoveContainer" containerID="b429ee368e1d2370a601fe8e4e71726df16a9d3feca5be6e6611ac0b69ecd2ce" Jan 24 00:41:23 crc kubenswrapper[4676]: E0124 00:41:23.160230 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b429ee368e1d2370a601fe8e4e71726df16a9d3feca5be6e6611ac0b69ecd2ce\": container with ID starting with b429ee368e1d2370a601fe8e4e71726df16a9d3feca5be6e6611ac0b69ecd2ce not found: ID does not exist" containerID="b429ee368e1d2370a601fe8e4e71726df16a9d3feca5be6e6611ac0b69ecd2ce" Jan 24 00:41:23 crc kubenswrapper[4676]: I0124 
00:41:23.160279 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b429ee368e1d2370a601fe8e4e71726df16a9d3feca5be6e6611ac0b69ecd2ce"} err="failed to get container status \"b429ee368e1d2370a601fe8e4e71726df16a9d3feca5be6e6611ac0b69ecd2ce\": rpc error: code = NotFound desc = could not find container \"b429ee368e1d2370a601fe8e4e71726df16a9d3feca5be6e6611ac0b69ecd2ce\": container with ID starting with b429ee368e1d2370a601fe8e4e71726df16a9d3feca5be6e6611ac0b69ecd2ce not found: ID does not exist" Jan 24 00:41:23 crc kubenswrapper[4676]: I0124 00:41:23.160316 4676 scope.go:117] "RemoveContainer" containerID="5d44871de50f32676b8740620a80ff69be7c3c2e220b2f2348f10f88bfbeede5" Jan 24 00:41:23 crc kubenswrapper[4676]: E0124 00:41:23.160699 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d44871de50f32676b8740620a80ff69be7c3c2e220b2f2348f10f88bfbeede5\": container with ID starting with 5d44871de50f32676b8740620a80ff69be7c3c2e220b2f2348f10f88bfbeede5 not found: ID does not exist" containerID="5d44871de50f32676b8740620a80ff69be7c3c2e220b2f2348f10f88bfbeede5" Jan 24 00:41:23 crc kubenswrapper[4676]: I0124 00:41:23.160748 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d44871de50f32676b8740620a80ff69be7c3c2e220b2f2348f10f88bfbeede5"} err="failed to get container status \"5d44871de50f32676b8740620a80ff69be7c3c2e220b2f2348f10f88bfbeede5\": rpc error: code = NotFound desc = could not find container \"5d44871de50f32676b8740620a80ff69be7c3c2e220b2f2348f10f88bfbeede5\": container with ID starting with 5d44871de50f32676b8740620a80ff69be7c3c2e220b2f2348f10f88bfbeede5 not found: ID does not exist" Jan 24 00:41:23 crc kubenswrapper[4676]: I0124 00:41:23.160773 4676 scope.go:117] "RemoveContainer" containerID="737d34bc6c9c50d05e277f8c8b483264ff8b1d1d8f9f7474fd3cc837bd2e60f7" Jan 24 00:41:23 crc 
kubenswrapper[4676]: E0124 00:41:23.161034 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"737d34bc6c9c50d05e277f8c8b483264ff8b1d1d8f9f7474fd3cc837bd2e60f7\": container with ID starting with 737d34bc6c9c50d05e277f8c8b483264ff8b1d1d8f9f7474fd3cc837bd2e60f7 not found: ID does not exist" containerID="737d34bc6c9c50d05e277f8c8b483264ff8b1d1d8f9f7474fd3cc837bd2e60f7" Jan 24 00:41:23 crc kubenswrapper[4676]: I0124 00:41:23.161058 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"737d34bc6c9c50d05e277f8c8b483264ff8b1d1d8f9f7474fd3cc837bd2e60f7"} err="failed to get container status \"737d34bc6c9c50d05e277f8c8b483264ff8b1d1d8f9f7474fd3cc837bd2e60f7\": rpc error: code = NotFound desc = could not find container \"737d34bc6c9c50d05e277f8c8b483264ff8b1d1d8f9f7474fd3cc837bd2e60f7\": container with ID starting with 737d34bc6c9c50d05e277f8c8b483264ff8b1d1d8f9f7474fd3cc837bd2e60f7 not found: ID does not exist" Jan 24 00:41:24 crc kubenswrapper[4676]: I0124 00:41:24.039147 4676 generic.go:334] "Generic (PLEG): container finished" podID="5b54a57e-1dba-4552-b03e-94685f89b614" containerID="a8722a93ad722ca622f815f683251c0b2fa480044fc8a3a7f7bbcb10216c506c" exitCode=0 Jan 24 00:41:24 crc kubenswrapper[4676]: I0124 00:41:24.039557 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tgmv" event={"ID":"5b54a57e-1dba-4552-b03e-94685f89b614","Type":"ContainerDied","Data":"a8722a93ad722ca622f815f683251c0b2fa480044fc8a3a7f7bbcb10216c506c"} Jan 24 00:41:24 crc kubenswrapper[4676]: I0124 00:41:24.271354 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96dc44b8-b0e8-4f7e-bc80-9b4cf3971847" path="/var/lib/kubelet/pods/96dc44b8-b0e8-4f7e-bc80-9b4cf3971847/volumes" Jan 24 00:41:25 crc kubenswrapper[4676]: I0124 00:41:25.093849 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-2tgmv" event={"ID":"5b54a57e-1dba-4552-b03e-94685f89b614","Type":"ContainerStarted","Data":"37c44ddc8bf250f1ae7443ad12da8e6ead1f79c51b0da873d307014e625972be"} Jan 24 00:41:25 crc kubenswrapper[4676]: I0124 00:41:25.124999 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2tgmv" podStartSLOduration=2.52918708 podStartE2EDuration="5.124980312s" podCreationTimestamp="2026-01-24 00:41:20 +0000 UTC" firstStartedPulling="2026-01-24 00:41:22.020820386 +0000 UTC m=+2266.050791397" lastFinishedPulling="2026-01-24 00:41:24.616613628 +0000 UTC m=+2268.646584629" observedRunningTime="2026-01-24 00:41:25.123076922 +0000 UTC m=+2269.153047913" watchObservedRunningTime="2026-01-24 00:41:25.124980312 +0000 UTC m=+2269.154951303" Jan 24 00:41:30 crc kubenswrapper[4676]: I0124 00:41:30.486913 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:30 crc kubenswrapper[4676]: I0124 00:41:30.487810 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:30 crc kubenswrapper[4676]: I0124 00:41:30.549996 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:31 crc kubenswrapper[4676]: I0124 00:41:31.225461 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:31 crc kubenswrapper[4676]: I0124 00:41:31.294010 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2tgmv"] Jan 24 00:41:33 crc kubenswrapper[4676]: I0124 00:41:33.177337 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2tgmv" 
podUID="5b54a57e-1dba-4552-b03e-94685f89b614" containerName="registry-server" containerID="cri-o://37c44ddc8bf250f1ae7443ad12da8e6ead1f79c51b0da873d307014e625972be" gracePeriod=2 Jan 24 00:41:33 crc kubenswrapper[4676]: I0124 00:41:33.672467 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:33 crc kubenswrapper[4676]: I0124 00:41:33.760329 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b54a57e-1dba-4552-b03e-94685f89b614-utilities\") pod \"5b54a57e-1dba-4552-b03e-94685f89b614\" (UID: \"5b54a57e-1dba-4552-b03e-94685f89b614\") " Jan 24 00:41:33 crc kubenswrapper[4676]: I0124 00:41:33.760544 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b54a57e-1dba-4552-b03e-94685f89b614-catalog-content\") pod \"5b54a57e-1dba-4552-b03e-94685f89b614\" (UID: \"5b54a57e-1dba-4552-b03e-94685f89b614\") " Jan 24 00:41:33 crc kubenswrapper[4676]: I0124 00:41:33.760607 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zknhs\" (UniqueName: \"kubernetes.io/projected/5b54a57e-1dba-4552-b03e-94685f89b614-kube-api-access-zknhs\") pod \"5b54a57e-1dba-4552-b03e-94685f89b614\" (UID: \"5b54a57e-1dba-4552-b03e-94685f89b614\") " Jan 24 00:41:33 crc kubenswrapper[4676]: I0124 00:41:33.761263 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b54a57e-1dba-4552-b03e-94685f89b614-utilities" (OuterVolumeSpecName: "utilities") pod "5b54a57e-1dba-4552-b03e-94685f89b614" (UID: "5b54a57e-1dba-4552-b03e-94685f89b614"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:41:33 crc kubenswrapper[4676]: I0124 00:41:33.766631 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b54a57e-1dba-4552-b03e-94685f89b614-kube-api-access-zknhs" (OuterVolumeSpecName: "kube-api-access-zknhs") pod "5b54a57e-1dba-4552-b03e-94685f89b614" (UID: "5b54a57e-1dba-4552-b03e-94685f89b614"). InnerVolumeSpecName "kube-api-access-zknhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:41:33 crc kubenswrapper[4676]: I0124 00:41:33.781868 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b54a57e-1dba-4552-b03e-94685f89b614-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b54a57e-1dba-4552-b03e-94685f89b614" (UID: "5b54a57e-1dba-4552-b03e-94685f89b614"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:41:33 crc kubenswrapper[4676]: I0124 00:41:33.864255 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b54a57e-1dba-4552-b03e-94685f89b614-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:41:33 crc kubenswrapper[4676]: I0124 00:41:33.864305 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b54a57e-1dba-4552-b03e-94685f89b614-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:41:33 crc kubenswrapper[4676]: I0124 00:41:33.864328 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zknhs\" (UniqueName: \"kubernetes.io/projected/5b54a57e-1dba-4552-b03e-94685f89b614-kube-api-access-zknhs\") on node \"crc\" DevicePath \"\"" Jan 24 00:41:34 crc kubenswrapper[4676]: I0124 00:41:34.189503 4676 generic.go:334] "Generic (PLEG): container finished" podID="5b54a57e-1dba-4552-b03e-94685f89b614" 
containerID="37c44ddc8bf250f1ae7443ad12da8e6ead1f79c51b0da873d307014e625972be" exitCode=0 Jan 24 00:41:34 crc kubenswrapper[4676]: I0124 00:41:34.189572 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tgmv" event={"ID":"5b54a57e-1dba-4552-b03e-94685f89b614","Type":"ContainerDied","Data":"37c44ddc8bf250f1ae7443ad12da8e6ead1f79c51b0da873d307014e625972be"} Jan 24 00:41:34 crc kubenswrapper[4676]: I0124 00:41:34.189604 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tgmv" event={"ID":"5b54a57e-1dba-4552-b03e-94685f89b614","Type":"ContainerDied","Data":"603cdc7a991c7585c94a502a0ae82308201bad02f44f4c7687a51077bce3e693"} Jan 24 00:41:34 crc kubenswrapper[4676]: I0124 00:41:34.189625 4676 scope.go:117] "RemoveContainer" containerID="37c44ddc8bf250f1ae7443ad12da8e6ead1f79c51b0da873d307014e625972be" Jan 24 00:41:34 crc kubenswrapper[4676]: I0124 00:41:34.192300 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2tgmv" Jan 24 00:41:34 crc kubenswrapper[4676]: I0124 00:41:34.242824 4676 scope.go:117] "RemoveContainer" containerID="a8722a93ad722ca622f815f683251c0b2fa480044fc8a3a7f7bbcb10216c506c" Jan 24 00:41:34 crc kubenswrapper[4676]: I0124 00:41:34.274580 4676 scope.go:117] "RemoveContainer" containerID="c2701b192e86de32a0ec79475d981d5c7c6144cfecdced05bb10231d98dfc4e0" Jan 24 00:41:34 crc kubenswrapper[4676]: I0124 00:41:34.276902 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2tgmv"] Jan 24 00:41:34 crc kubenswrapper[4676]: I0124 00:41:34.277062 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2tgmv"] Jan 24 00:41:34 crc kubenswrapper[4676]: I0124 00:41:34.332662 4676 scope.go:117] "RemoveContainer" containerID="37c44ddc8bf250f1ae7443ad12da8e6ead1f79c51b0da873d307014e625972be" Jan 24 00:41:34 crc kubenswrapper[4676]: E0124 00:41:34.333525 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37c44ddc8bf250f1ae7443ad12da8e6ead1f79c51b0da873d307014e625972be\": container with ID starting with 37c44ddc8bf250f1ae7443ad12da8e6ead1f79c51b0da873d307014e625972be not found: ID does not exist" containerID="37c44ddc8bf250f1ae7443ad12da8e6ead1f79c51b0da873d307014e625972be" Jan 24 00:41:34 crc kubenswrapper[4676]: I0124 00:41:34.333578 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37c44ddc8bf250f1ae7443ad12da8e6ead1f79c51b0da873d307014e625972be"} err="failed to get container status \"37c44ddc8bf250f1ae7443ad12da8e6ead1f79c51b0da873d307014e625972be\": rpc error: code = NotFound desc = could not find container \"37c44ddc8bf250f1ae7443ad12da8e6ead1f79c51b0da873d307014e625972be\": container with ID starting with 37c44ddc8bf250f1ae7443ad12da8e6ead1f79c51b0da873d307014e625972be not found: 
ID does not exist" Jan 24 00:41:34 crc kubenswrapper[4676]: I0124 00:41:34.333611 4676 scope.go:117] "RemoveContainer" containerID="a8722a93ad722ca622f815f683251c0b2fa480044fc8a3a7f7bbcb10216c506c" Jan 24 00:41:34 crc kubenswrapper[4676]: E0124 00:41:34.334087 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8722a93ad722ca622f815f683251c0b2fa480044fc8a3a7f7bbcb10216c506c\": container with ID starting with a8722a93ad722ca622f815f683251c0b2fa480044fc8a3a7f7bbcb10216c506c not found: ID does not exist" containerID="a8722a93ad722ca622f815f683251c0b2fa480044fc8a3a7f7bbcb10216c506c" Jan 24 00:41:34 crc kubenswrapper[4676]: I0124 00:41:34.334108 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8722a93ad722ca622f815f683251c0b2fa480044fc8a3a7f7bbcb10216c506c"} err="failed to get container status \"a8722a93ad722ca622f815f683251c0b2fa480044fc8a3a7f7bbcb10216c506c\": rpc error: code = NotFound desc = could not find container \"a8722a93ad722ca622f815f683251c0b2fa480044fc8a3a7f7bbcb10216c506c\": container with ID starting with a8722a93ad722ca622f815f683251c0b2fa480044fc8a3a7f7bbcb10216c506c not found: ID does not exist" Jan 24 00:41:34 crc kubenswrapper[4676]: I0124 00:41:34.334122 4676 scope.go:117] "RemoveContainer" containerID="c2701b192e86de32a0ec79475d981d5c7c6144cfecdced05bb10231d98dfc4e0" Jan 24 00:41:34 crc kubenswrapper[4676]: E0124 00:41:34.334524 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2701b192e86de32a0ec79475d981d5c7c6144cfecdced05bb10231d98dfc4e0\": container with ID starting with c2701b192e86de32a0ec79475d981d5c7c6144cfecdced05bb10231d98dfc4e0 not found: ID does not exist" containerID="c2701b192e86de32a0ec79475d981d5c7c6144cfecdced05bb10231d98dfc4e0" Jan 24 00:41:34 crc kubenswrapper[4676]: I0124 00:41:34.334547 4676 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2701b192e86de32a0ec79475d981d5c7c6144cfecdced05bb10231d98dfc4e0"} err="failed to get container status \"c2701b192e86de32a0ec79475d981d5c7c6144cfecdced05bb10231d98dfc4e0\": rpc error: code = NotFound desc = could not find container \"c2701b192e86de32a0ec79475d981d5c7c6144cfecdced05bb10231d98dfc4e0\": container with ID starting with c2701b192e86de32a0ec79475d981d5c7c6144cfecdced05bb10231d98dfc4e0 not found: ID does not exist" Jan 24 00:41:36 crc kubenswrapper[4676]: I0124 00:41:36.291772 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b54a57e-1dba-4552-b03e-94685f89b614" path="/var/lib/kubelet/pods/5b54a57e-1dba-4552-b03e-94685f89b614/volumes" Jan 24 00:41:39 crc kubenswrapper[4676]: I0124 00:41:39.364595 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:41:39 crc kubenswrapper[4676]: I0124 00:41:39.364926 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:41:39 crc kubenswrapper[4676]: I0124 00:41:39.364964 4676 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:41:39 crc kubenswrapper[4676]: I0124 00:41:39.365777 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac"} 
pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 00:41:39 crc kubenswrapper[4676]: I0124 00:41:39.365832 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" containerID="cri-o://0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" gracePeriod=600 Jan 24 00:41:39 crc kubenswrapper[4676]: E0124 00:41:39.502644 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:41:40 crc kubenswrapper[4676]: I0124 00:41:40.288272 4676 generic.go:334] "Generic (PLEG): container finished" podID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" exitCode=0 Jan 24 00:41:40 crc kubenswrapper[4676]: I0124 00:41:40.289575 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerDied","Data":"0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac"} Jan 24 00:41:40 crc kubenswrapper[4676]: I0124 00:41:40.289655 4676 scope.go:117] "RemoveContainer" containerID="658098ecc7ebeb43955aee4f3317cbcc452c89ec3e2a5ddc24ed196ea90b98d2" Jan 24 00:41:40 crc kubenswrapper[4676]: I0124 00:41:40.290610 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 
24 00:41:40 crc kubenswrapper[4676]: E0124 00:41:40.291063 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:41:54 crc kubenswrapper[4676]: I0124 00:41:54.255916 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:41:54 crc kubenswrapper[4676]: E0124 00:41:54.256788 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:42:05 crc kubenswrapper[4676]: I0124 00:42:05.256580 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:42:05 crc kubenswrapper[4676]: E0124 00:42:05.257213 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:42:18 crc kubenswrapper[4676]: I0124 00:42:18.257945 4676 scope.go:117] "RemoveContainer" 
containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:42:18 crc kubenswrapper[4676]: E0124 00:42:18.259346 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:42:32 crc kubenswrapper[4676]: I0124 00:42:32.258006 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:42:32 crc kubenswrapper[4676]: E0124 00:42:32.259183 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:42:45 crc kubenswrapper[4676]: I0124 00:42:45.257430 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:42:45 crc kubenswrapper[4676]: E0124 00:42:45.258591 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:42:57 crc kubenswrapper[4676]: I0124 00:42:57.255989 4676 scope.go:117] 
"RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:42:57 crc kubenswrapper[4676]: E0124 00:42:57.257142 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:43:10 crc kubenswrapper[4676]: I0124 00:43:10.256268 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:43:10 crc kubenswrapper[4676]: E0124 00:43:10.257409 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:43:21 crc kubenswrapper[4676]: I0124 00:43:21.257436 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:43:21 crc kubenswrapper[4676]: E0124 00:43:21.260989 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:43:36 crc kubenswrapper[4676]: I0124 00:43:36.265981 
4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:43:36 crc kubenswrapper[4676]: E0124 00:43:36.266732 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:43:47 crc kubenswrapper[4676]: I0124 00:43:47.256058 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:43:47 crc kubenswrapper[4676]: E0124 00:43:47.256994 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:43:58 crc kubenswrapper[4676]: I0124 00:43:58.255708 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:43:58 crc kubenswrapper[4676]: E0124 00:43:58.256522 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:44:09 crc kubenswrapper[4676]: I0124 
00:44:09.255749 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:44:09 crc kubenswrapper[4676]: E0124 00:44:09.256289 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:44:20 crc kubenswrapper[4676]: I0124 00:44:20.256126 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:44:20 crc kubenswrapper[4676]: E0124 00:44:20.257022 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:44:35 crc kubenswrapper[4676]: I0124 00:44:35.260075 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:44:35 crc kubenswrapper[4676]: E0124 00:44:35.261189 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:44:47 crc 
kubenswrapper[4676]: I0124 00:44:47.255993 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:44:47 crc kubenswrapper[4676]: E0124 00:44:47.256786 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:44:59 crc kubenswrapper[4676]: I0124 00:44:59.257168 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:44:59 crc kubenswrapper[4676]: E0124 00:44:59.258165 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.167979 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt"] Jan 24 00:45:00 crc kubenswrapper[4676]: E0124 00:45:00.168923 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b54a57e-1dba-4552-b03e-94685f89b614" containerName="registry-server" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.168970 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b54a57e-1dba-4552-b03e-94685f89b614" containerName="registry-server" Jan 24 00:45:00 crc kubenswrapper[4676]: E0124 00:45:00.169032 4676 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="96dc44b8-b0e8-4f7e-bc80-9b4cf3971847" containerName="registry-server" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.169052 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="96dc44b8-b0e8-4f7e-bc80-9b4cf3971847" containerName="registry-server" Jan 24 00:45:00 crc kubenswrapper[4676]: E0124 00:45:00.169075 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b54a57e-1dba-4552-b03e-94685f89b614" containerName="extract-content" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.169092 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b54a57e-1dba-4552-b03e-94685f89b614" containerName="extract-content" Jan 24 00:45:00 crc kubenswrapper[4676]: E0124 00:45:00.169117 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96dc44b8-b0e8-4f7e-bc80-9b4cf3971847" containerName="extract-content" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.169132 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="96dc44b8-b0e8-4f7e-bc80-9b4cf3971847" containerName="extract-content" Jan 24 00:45:00 crc kubenswrapper[4676]: E0124 00:45:00.169169 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b54a57e-1dba-4552-b03e-94685f89b614" containerName="extract-utilities" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.169186 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b54a57e-1dba-4552-b03e-94685f89b614" containerName="extract-utilities" Jan 24 00:45:00 crc kubenswrapper[4676]: E0124 00:45:00.169239 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96dc44b8-b0e8-4f7e-bc80-9b4cf3971847" containerName="extract-utilities" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.169258 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="96dc44b8-b0e8-4f7e-bc80-9b4cf3971847" containerName="extract-utilities" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.169727 4676 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5b54a57e-1dba-4552-b03e-94685f89b614" containerName="registry-server" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.169783 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="96dc44b8-b0e8-4f7e-bc80-9b4cf3971847" containerName="registry-server" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.171252 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.175480 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.188082 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt"] Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.189522 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.267226 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5lfg\" (UniqueName: \"kubernetes.io/projected/8d2829b1-9d20-469f-8c25-475f980b00f1-kube-api-access-p5lfg\") pod \"collect-profiles-29486925-f7sqt\" (UID: \"8d2829b1-9d20-469f-8c25-475f980b00f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.267461 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d2829b1-9d20-469f-8c25-475f980b00f1-secret-volume\") pod \"collect-profiles-29486925-f7sqt\" (UID: \"8d2829b1-9d20-469f-8c25-475f980b00f1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.267491 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d2829b1-9d20-469f-8c25-475f980b00f1-config-volume\") pod \"collect-profiles-29486925-f7sqt\" (UID: \"8d2829b1-9d20-469f-8c25-475f980b00f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.369434 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5lfg\" (UniqueName: \"kubernetes.io/projected/8d2829b1-9d20-469f-8c25-475f980b00f1-kube-api-access-p5lfg\") pod \"collect-profiles-29486925-f7sqt\" (UID: \"8d2829b1-9d20-469f-8c25-475f980b00f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.369476 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d2829b1-9d20-469f-8c25-475f980b00f1-secret-volume\") pod \"collect-profiles-29486925-f7sqt\" (UID: \"8d2829b1-9d20-469f-8c25-475f980b00f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.369505 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d2829b1-9d20-469f-8c25-475f980b00f1-config-volume\") pod \"collect-profiles-29486925-f7sqt\" (UID: \"8d2829b1-9d20-469f-8c25-475f980b00f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.370272 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/8d2829b1-9d20-469f-8c25-475f980b00f1-config-volume\") pod \"collect-profiles-29486925-f7sqt\" (UID: \"8d2829b1-9d20-469f-8c25-475f980b00f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.378203 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d2829b1-9d20-469f-8c25-475f980b00f1-secret-volume\") pod \"collect-profiles-29486925-f7sqt\" (UID: \"8d2829b1-9d20-469f-8c25-475f980b00f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.390156 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5lfg\" (UniqueName: \"kubernetes.io/projected/8d2829b1-9d20-469f-8c25-475f980b00f1-kube-api-access-p5lfg\") pod \"collect-profiles-29486925-f7sqt\" (UID: \"8d2829b1-9d20-469f-8c25-475f980b00f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.491582 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" Jan 24 00:45:00 crc kubenswrapper[4676]: I0124 00:45:00.960276 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt"] Jan 24 00:45:01 crc kubenswrapper[4676]: I0124 00:45:01.546186 4676 generic.go:334] "Generic (PLEG): container finished" podID="8d2829b1-9d20-469f-8c25-475f980b00f1" containerID="8a48ddd1d6c47259792d6571cdd4d4b680b060204c9a096dda74141930a9ad12" exitCode=0 Jan 24 00:45:01 crc kubenswrapper[4676]: I0124 00:45:01.546299 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" event={"ID":"8d2829b1-9d20-469f-8c25-475f980b00f1","Type":"ContainerDied","Data":"8a48ddd1d6c47259792d6571cdd4d4b680b060204c9a096dda74141930a9ad12"} Jan 24 00:45:01 crc kubenswrapper[4676]: I0124 00:45:01.547539 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" event={"ID":"8d2829b1-9d20-469f-8c25-475f980b00f1","Type":"ContainerStarted","Data":"960fa06c3628fbca73cfcc44d608c7a1780cd8a341d3c0e7d4ffb4c586e5c67d"} Jan 24 00:45:02 crc kubenswrapper[4676]: I0124 00:45:02.891658 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" Jan 24 00:45:03 crc kubenswrapper[4676]: I0124 00:45:03.017137 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d2829b1-9d20-469f-8c25-475f980b00f1-config-volume\") pod \"8d2829b1-9d20-469f-8c25-475f980b00f1\" (UID: \"8d2829b1-9d20-469f-8c25-475f980b00f1\") " Jan 24 00:45:03 crc kubenswrapper[4676]: I0124 00:45:03.017612 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d2829b1-9d20-469f-8c25-475f980b00f1-secret-volume\") pod \"8d2829b1-9d20-469f-8c25-475f980b00f1\" (UID: \"8d2829b1-9d20-469f-8c25-475f980b00f1\") " Jan 24 00:45:03 crc kubenswrapper[4676]: I0124 00:45:03.018057 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d2829b1-9d20-469f-8c25-475f980b00f1-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d2829b1-9d20-469f-8c25-475f980b00f1" (UID: "8d2829b1-9d20-469f-8c25-475f980b00f1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:45:03 crc kubenswrapper[4676]: I0124 00:45:03.018188 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5lfg\" (UniqueName: \"kubernetes.io/projected/8d2829b1-9d20-469f-8c25-475f980b00f1-kube-api-access-p5lfg\") pod \"8d2829b1-9d20-469f-8c25-475f980b00f1\" (UID: \"8d2829b1-9d20-469f-8c25-475f980b00f1\") " Jan 24 00:45:03 crc kubenswrapper[4676]: I0124 00:45:03.019081 4676 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d2829b1-9d20-469f-8c25-475f980b00f1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 00:45:03 crc kubenswrapper[4676]: I0124 00:45:03.023010 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d2829b1-9d20-469f-8c25-475f980b00f1-kube-api-access-p5lfg" (OuterVolumeSpecName: "kube-api-access-p5lfg") pod "8d2829b1-9d20-469f-8c25-475f980b00f1" (UID: "8d2829b1-9d20-469f-8c25-475f980b00f1"). InnerVolumeSpecName "kube-api-access-p5lfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:45:03 crc kubenswrapper[4676]: I0124 00:45:03.024180 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2829b1-9d20-469f-8c25-475f980b00f1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d2829b1-9d20-469f-8c25-475f980b00f1" (UID: "8d2829b1-9d20-469f-8c25-475f980b00f1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:45:03 crc kubenswrapper[4676]: I0124 00:45:03.120923 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5lfg\" (UniqueName: \"kubernetes.io/projected/8d2829b1-9d20-469f-8c25-475f980b00f1-kube-api-access-p5lfg\") on node \"crc\" DevicePath \"\"" Jan 24 00:45:03 crc kubenswrapper[4676]: I0124 00:45:03.121128 4676 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d2829b1-9d20-469f-8c25-475f980b00f1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 00:45:03 crc kubenswrapper[4676]: I0124 00:45:03.570444 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" event={"ID":"8d2829b1-9d20-469f-8c25-475f980b00f1","Type":"ContainerDied","Data":"960fa06c3628fbca73cfcc44d608c7a1780cd8a341d3c0e7d4ffb4c586e5c67d"} Jan 24 00:45:03 crc kubenswrapper[4676]: I0124 00:45:03.570501 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="960fa06c3628fbca73cfcc44d608c7a1780cd8a341d3c0e7d4ffb4c586e5c67d" Jan 24 00:45:03 crc kubenswrapper[4676]: I0124 00:45:03.570595 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486925-f7sqt" Jan 24 00:45:03 crc kubenswrapper[4676]: I0124 00:45:03.967547 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg"] Jan 24 00:45:03 crc kubenswrapper[4676]: I0124 00:45:03.972423 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486880-7srrg"] Jan 24 00:45:04 crc kubenswrapper[4676]: I0124 00:45:04.280060 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eabf562-d289-4685-8ee5-ed1525930d19" path="/var/lib/kubelet/pods/1eabf562-d289-4685-8ee5-ed1525930d19/volumes" Jan 24 00:45:12 crc kubenswrapper[4676]: I0124 00:45:12.255793 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:45:12 crc kubenswrapper[4676]: E0124 00:45:12.256898 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:45:24 crc kubenswrapper[4676]: I0124 00:45:24.256967 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:45:24 crc kubenswrapper[4676]: E0124 00:45:24.257993 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:45:36 crc kubenswrapper[4676]: I0124 00:45:36.603291 4676 generic.go:334] "Generic (PLEG): container finished" podID="2cafe497-da96-4b39-bec2-1ec54f859303" containerID="9bb06db0d0e1ec2c2b37d9e152501f764e96d00ae7a309b970bb6c0a4f2a01db" exitCode=0 Jan 24 00:45:36 crc kubenswrapper[4676]: I0124 00:45:36.603603 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" event={"ID":"2cafe497-da96-4b39-bec2-1ec54f859303","Type":"ContainerDied","Data":"9bb06db0d0e1ec2c2b37d9e152501f764e96d00ae7a309b970bb6c0a4f2a01db"} Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.043551 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.152884 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-libvirt-combined-ca-bundle\") pod \"2cafe497-da96-4b39-bec2-1ec54f859303\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.153319 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h6gz\" (UniqueName: \"kubernetes.io/projected/2cafe497-da96-4b39-bec2-1ec54f859303-kube-api-access-9h6gz\") pod \"2cafe497-da96-4b39-bec2-1ec54f859303\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.153375 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-ssh-key-openstack-edpm-ipam\") pod \"2cafe497-da96-4b39-bec2-1ec54f859303\" (UID: 
\"2cafe497-da96-4b39-bec2-1ec54f859303\") " Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.153435 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-libvirt-secret-0\") pod \"2cafe497-da96-4b39-bec2-1ec54f859303\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.153636 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-inventory\") pod \"2cafe497-da96-4b39-bec2-1ec54f859303\" (UID: \"2cafe497-da96-4b39-bec2-1ec54f859303\") " Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.162665 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2cafe497-da96-4b39-bec2-1ec54f859303" (UID: "2cafe497-da96-4b39-bec2-1ec54f859303"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.163854 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cafe497-da96-4b39-bec2-1ec54f859303-kube-api-access-9h6gz" (OuterVolumeSpecName: "kube-api-access-9h6gz") pod "2cafe497-da96-4b39-bec2-1ec54f859303" (UID: "2cafe497-da96-4b39-bec2-1ec54f859303"). InnerVolumeSpecName "kube-api-access-9h6gz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.186874 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-inventory" (OuterVolumeSpecName: "inventory") pod "2cafe497-da96-4b39-bec2-1ec54f859303" (UID: "2cafe497-da96-4b39-bec2-1ec54f859303"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.190115 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "2cafe497-da96-4b39-bec2-1ec54f859303" (UID: "2cafe497-da96-4b39-bec2-1ec54f859303"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.199087 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2cafe497-da96-4b39-bec2-1ec54f859303" (UID: "2cafe497-da96-4b39-bec2-1ec54f859303"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.256592 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.256644 4676 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.256663 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.256686 4676 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cafe497-da96-4b39-bec2-1ec54f859303-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.256706 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h6gz\" (UniqueName: \"kubernetes.io/projected/2cafe497-da96-4b39-bec2-1ec54f859303-kube-api-access-9h6gz\") on node \"crc\" DevicePath \"\"" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.631123 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" event={"ID":"2cafe497-da96-4b39-bec2-1ec54f859303","Type":"ContainerDied","Data":"1547e9b9c1a3265026b8579e925dfd46811b1b86e0e0cebd473a29138f45bc55"} Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.631176 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1547e9b9c1a3265026b8579e925dfd46811b1b86e0e0cebd473a29138f45bc55" Jan 24 
00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.631551 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ddk99" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.737156 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g"] Jan 24 00:45:38 crc kubenswrapper[4676]: E0124 00:45:38.737524 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cafe497-da96-4b39-bec2-1ec54f859303" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.737541 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cafe497-da96-4b39-bec2-1ec54f859303" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 24 00:45:38 crc kubenswrapper[4676]: E0124 00:45:38.737549 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d2829b1-9d20-469f-8c25-475f980b00f1" containerName="collect-profiles" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.737556 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d2829b1-9d20-469f-8c25-475f980b00f1" containerName="collect-profiles" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.737714 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d2829b1-9d20-469f-8c25-475f980b00f1" containerName="collect-profiles" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.737732 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cafe497-da96-4b39-bec2-1ec54f859303" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.738253 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.741058 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.741482 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.741907 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.742212 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.743338 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.743818 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.753679 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.765210 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g"] Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.876910 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.876959 4676 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6689\" (UniqueName: \"kubernetes.io/projected/75cae94d-b818-4f92-b42d-fd8cec63a657-kube-api-access-g6689\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.876981 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.877006 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.877108 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.877178 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.877199 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.877403 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.877468 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.979752 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: 
\"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.979806 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.979888 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.979929 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6689\" (UniqueName: \"kubernetes.io/projected/75cae94d-b818-4f92-b42d-fd8cec63a657-kube-api-access-g6689\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.979946 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.979970 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.979998 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.980058 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.980075 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.981310 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.985666 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.985822 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:38 crc kubenswrapper[4676]: I0124 00:45:38.993823 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:39 crc kubenswrapper[4676]: I0124 00:45:39.000153 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:39 crc kubenswrapper[4676]: I0124 00:45:39.001714 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:39 crc kubenswrapper[4676]: I0124 00:45:39.002175 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:39 crc kubenswrapper[4676]: I0124 00:45:39.006896 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:39 crc kubenswrapper[4676]: I0124 00:45:39.010057 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6689\" (UniqueName: \"kubernetes.io/projected/75cae94d-b818-4f92-b42d-fd8cec63a657-kube-api-access-g6689\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5cl2g\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:39 crc kubenswrapper[4676]: I0124 00:45:39.059268 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:45:39 crc kubenswrapper[4676]: I0124 00:45:39.255830 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:45:39 crc kubenswrapper[4676]: E0124 00:45:39.256505 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:45:39 crc kubenswrapper[4676]: I0124 00:45:39.447889 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g"] Jan 24 00:45:39 crc kubenswrapper[4676]: W0124 00:45:39.460572 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75cae94d_b818_4f92_b42d_fd8cec63a657.slice/crio-7a1a57426601586865e4d5bc7b431f54c628df289eb8caca5bd5436e21faee21 WatchSource:0}: Error finding container 7a1a57426601586865e4d5bc7b431f54c628df289eb8caca5bd5436e21faee21: Status 404 returned error can't find the container with id 7a1a57426601586865e4d5bc7b431f54c628df289eb8caca5bd5436e21faee21 Jan 24 00:45:39 crc kubenswrapper[4676]: I0124 00:45:39.641756 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" event={"ID":"75cae94d-b818-4f92-b42d-fd8cec63a657","Type":"ContainerStarted","Data":"7a1a57426601586865e4d5bc7b431f54c628df289eb8caca5bd5436e21faee21"} Jan 24 00:45:40 crc kubenswrapper[4676]: I0124 00:45:40.654128 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" 
event={"ID":"75cae94d-b818-4f92-b42d-fd8cec63a657","Type":"ContainerStarted","Data":"9049acca3eaa19729de3abedb765017e8466c178a888c60c1ddeb96f4f0c0556"} Jan 24 00:45:40 crc kubenswrapper[4676]: I0124 00:45:40.704519 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" podStartSLOduration=2.173199826 podStartE2EDuration="2.704502101s" podCreationTimestamp="2026-01-24 00:45:38 +0000 UTC" firstStartedPulling="2026-01-24 00:45:39.463773204 +0000 UTC m=+2523.493744215" lastFinishedPulling="2026-01-24 00:45:39.995075489 +0000 UTC m=+2524.025046490" observedRunningTime="2026-01-24 00:45:40.69861277 +0000 UTC m=+2524.728583801" watchObservedRunningTime="2026-01-24 00:45:40.704502101 +0000 UTC m=+2524.734473102" Jan 24 00:45:42 crc kubenswrapper[4676]: I0124 00:45:42.712727 4676 scope.go:117] "RemoveContainer" containerID="572cf18d919a142df3a87fe72606c221700742f64f3b5868d79f863d9965c925" Jan 24 00:45:52 crc kubenswrapper[4676]: I0124 00:45:52.256253 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:45:52 crc kubenswrapper[4676]: E0124 00:45:52.257184 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:46:00 crc kubenswrapper[4676]: I0124 00:46:00.295535 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9gbql"] Jan 24 00:46:00 crc kubenswrapper[4676]: I0124 00:46:00.298415 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:00 crc kubenswrapper[4676]: I0124 00:46:00.321267 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gbql"] Jan 24 00:46:00 crc kubenswrapper[4676]: I0124 00:46:00.442734 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpsct\" (UniqueName: \"kubernetes.io/projected/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-kube-api-access-mpsct\") pod \"community-operators-9gbql\" (UID: \"20acbbf0-6ea7-4138-9b68-4ab7b31071e5\") " pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:00 crc kubenswrapper[4676]: I0124 00:46:00.442800 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-utilities\") pod \"community-operators-9gbql\" (UID: \"20acbbf0-6ea7-4138-9b68-4ab7b31071e5\") " pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:00 crc kubenswrapper[4676]: I0124 00:46:00.442821 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-catalog-content\") pod \"community-operators-9gbql\" (UID: \"20acbbf0-6ea7-4138-9b68-4ab7b31071e5\") " pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:00 crc kubenswrapper[4676]: I0124 00:46:00.544730 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpsct\" (UniqueName: \"kubernetes.io/projected/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-kube-api-access-mpsct\") pod \"community-operators-9gbql\" (UID: \"20acbbf0-6ea7-4138-9b68-4ab7b31071e5\") " pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:00 crc kubenswrapper[4676]: I0124 00:46:00.544793 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-utilities\") pod \"community-operators-9gbql\" (UID: \"20acbbf0-6ea7-4138-9b68-4ab7b31071e5\") " pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:00 crc kubenswrapper[4676]: I0124 00:46:00.544825 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-catalog-content\") pod \"community-operators-9gbql\" (UID: \"20acbbf0-6ea7-4138-9b68-4ab7b31071e5\") " pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:00 crc kubenswrapper[4676]: I0124 00:46:00.545328 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-utilities\") pod \"community-operators-9gbql\" (UID: \"20acbbf0-6ea7-4138-9b68-4ab7b31071e5\") " pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:00 crc kubenswrapper[4676]: I0124 00:46:00.545486 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-catalog-content\") pod \"community-operators-9gbql\" (UID: \"20acbbf0-6ea7-4138-9b68-4ab7b31071e5\") " pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:00 crc kubenswrapper[4676]: I0124 00:46:00.564689 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpsct\" (UniqueName: \"kubernetes.io/projected/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-kube-api-access-mpsct\") pod \"community-operators-9gbql\" (UID: \"20acbbf0-6ea7-4138-9b68-4ab7b31071e5\") " pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:00 crc kubenswrapper[4676]: I0124 00:46:00.621558 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:01 crc kubenswrapper[4676]: I0124 00:46:01.156857 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gbql"] Jan 24 00:46:01 crc kubenswrapper[4676]: I0124 00:46:01.879409 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gbql" event={"ID":"20acbbf0-6ea7-4138-9b68-4ab7b31071e5","Type":"ContainerDied","Data":"e8ecd8886191eaccb5939178d7358f3825d6ba6249f300176017ef377067634e"} Jan 24 00:46:01 crc kubenswrapper[4676]: I0124 00:46:01.879347 4676 generic.go:334] "Generic (PLEG): container finished" podID="20acbbf0-6ea7-4138-9b68-4ab7b31071e5" containerID="e8ecd8886191eaccb5939178d7358f3825d6ba6249f300176017ef377067634e" exitCode=0 Jan 24 00:46:01 crc kubenswrapper[4676]: I0124 00:46:01.879493 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gbql" event={"ID":"20acbbf0-6ea7-4138-9b68-4ab7b31071e5","Type":"ContainerStarted","Data":"2f61a2bc6b2a611ed6a3f9e74e1dcca47d070d91d158983ca73e39445419936f"} Jan 24 00:46:01 crc kubenswrapper[4676]: I0124 00:46:01.882367 4676 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 00:46:02 crc kubenswrapper[4676]: I0124 00:46:02.890356 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gbql" event={"ID":"20acbbf0-6ea7-4138-9b68-4ab7b31071e5","Type":"ContainerStarted","Data":"01b3ed872259c694bfc2d43f160db2bacf44f5bfd0e78056095d41e8770fd6f3"} Jan 24 00:46:04 crc kubenswrapper[4676]: I0124 00:46:04.011523 4676 generic.go:334] "Generic (PLEG): container finished" podID="20acbbf0-6ea7-4138-9b68-4ab7b31071e5" containerID="01b3ed872259c694bfc2d43f160db2bacf44f5bfd0e78056095d41e8770fd6f3" exitCode=0 Jan 24 00:46:04 crc kubenswrapper[4676]: I0124 00:46:04.011571 4676 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-9gbql" event={"ID":"20acbbf0-6ea7-4138-9b68-4ab7b31071e5","Type":"ContainerDied","Data":"01b3ed872259c694bfc2d43f160db2bacf44f5bfd0e78056095d41e8770fd6f3"} Jan 24 00:46:05 crc kubenswrapper[4676]: I0124 00:46:05.022538 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gbql" event={"ID":"20acbbf0-6ea7-4138-9b68-4ab7b31071e5","Type":"ContainerStarted","Data":"727704ff4f53ba7b13f4b78ff613c85e8daf69033f4490df8504f6bfe1a82893"} Jan 24 00:46:05 crc kubenswrapper[4676]: I0124 00:46:05.075537 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9gbql" podStartSLOduration=2.556075646 podStartE2EDuration="5.075512843s" podCreationTimestamp="2026-01-24 00:46:00 +0000 UTC" firstStartedPulling="2026-01-24 00:46:01.881860098 +0000 UTC m=+2545.911831099" lastFinishedPulling="2026-01-24 00:46:04.401297295 +0000 UTC m=+2548.431268296" observedRunningTime="2026-01-24 00:46:05.069645802 +0000 UTC m=+2549.099616843" watchObservedRunningTime="2026-01-24 00:46:05.075512843 +0000 UTC m=+2549.105483854" Jan 24 00:46:06 crc kubenswrapper[4676]: I0124 00:46:06.261706 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:46:06 crc kubenswrapper[4676]: E0124 00:46:06.262178 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:46:10 crc kubenswrapper[4676]: I0124 00:46:10.622479 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:10 crc kubenswrapper[4676]: I0124 00:46:10.622866 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:10 crc kubenswrapper[4676]: I0124 00:46:10.698500 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:11 crc kubenswrapper[4676]: I0124 00:46:11.136502 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:11 crc kubenswrapper[4676]: I0124 00:46:11.202485 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9gbql"] Jan 24 00:46:13 crc kubenswrapper[4676]: I0124 00:46:13.107874 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9gbql" podUID="20acbbf0-6ea7-4138-9b68-4ab7b31071e5" containerName="registry-server" containerID="cri-o://727704ff4f53ba7b13f4b78ff613c85e8daf69033f4490df8504f6bfe1a82893" gracePeriod=2 Jan 24 00:46:13 crc kubenswrapper[4676]: I0124 00:46:13.573979 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:13 crc kubenswrapper[4676]: I0124 00:46:13.707072 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpsct\" (UniqueName: \"kubernetes.io/projected/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-kube-api-access-mpsct\") pod \"20acbbf0-6ea7-4138-9b68-4ab7b31071e5\" (UID: \"20acbbf0-6ea7-4138-9b68-4ab7b31071e5\") " Jan 24 00:46:13 crc kubenswrapper[4676]: I0124 00:46:13.707465 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-utilities\") pod \"20acbbf0-6ea7-4138-9b68-4ab7b31071e5\" (UID: \"20acbbf0-6ea7-4138-9b68-4ab7b31071e5\") " Jan 24 00:46:13 crc kubenswrapper[4676]: I0124 00:46:13.707558 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-catalog-content\") pod \"20acbbf0-6ea7-4138-9b68-4ab7b31071e5\" (UID: \"20acbbf0-6ea7-4138-9b68-4ab7b31071e5\") " Jan 24 00:46:13 crc kubenswrapper[4676]: I0124 00:46:13.710217 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-utilities" (OuterVolumeSpecName: "utilities") pod "20acbbf0-6ea7-4138-9b68-4ab7b31071e5" (UID: "20acbbf0-6ea7-4138-9b68-4ab7b31071e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:46:13 crc kubenswrapper[4676]: I0124 00:46:13.717684 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-kube-api-access-mpsct" (OuterVolumeSpecName: "kube-api-access-mpsct") pod "20acbbf0-6ea7-4138-9b68-4ab7b31071e5" (UID: "20acbbf0-6ea7-4138-9b68-4ab7b31071e5"). InnerVolumeSpecName "kube-api-access-mpsct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:46:13 crc kubenswrapper[4676]: I0124 00:46:13.791147 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20acbbf0-6ea7-4138-9b68-4ab7b31071e5" (UID: "20acbbf0-6ea7-4138-9b68-4ab7b31071e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:46:13 crc kubenswrapper[4676]: I0124 00:46:13.809966 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpsct\" (UniqueName: \"kubernetes.io/projected/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-kube-api-access-mpsct\") on node \"crc\" DevicePath \"\"" Jan 24 00:46:13 crc kubenswrapper[4676]: I0124 00:46:13.810003 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:46:13 crc kubenswrapper[4676]: I0124 00:46:13.810015 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20acbbf0-6ea7-4138-9b68-4ab7b31071e5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.119669 4676 generic.go:334] "Generic (PLEG): container finished" podID="20acbbf0-6ea7-4138-9b68-4ab7b31071e5" containerID="727704ff4f53ba7b13f4b78ff613c85e8daf69033f4490df8504f6bfe1a82893" exitCode=0 Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.119733 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9gbql" Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.119750 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gbql" event={"ID":"20acbbf0-6ea7-4138-9b68-4ab7b31071e5","Type":"ContainerDied","Data":"727704ff4f53ba7b13f4b78ff613c85e8daf69033f4490df8504f6bfe1a82893"} Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.119812 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gbql" event={"ID":"20acbbf0-6ea7-4138-9b68-4ab7b31071e5","Type":"ContainerDied","Data":"2f61a2bc6b2a611ed6a3f9e74e1dcca47d070d91d158983ca73e39445419936f"} Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.119836 4676 scope.go:117] "RemoveContainer" containerID="727704ff4f53ba7b13f4b78ff613c85e8daf69033f4490df8504f6bfe1a82893" Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.152515 4676 scope.go:117] "RemoveContainer" containerID="01b3ed872259c694bfc2d43f160db2bacf44f5bfd0e78056095d41e8770fd6f3" Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.189423 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9gbql"] Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.220767 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9gbql"] Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.224098 4676 scope.go:117] "RemoveContainer" containerID="e8ecd8886191eaccb5939178d7358f3825d6ba6249f300176017ef377067634e" Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.251221 4676 scope.go:117] "RemoveContainer" containerID="727704ff4f53ba7b13f4b78ff613c85e8daf69033f4490df8504f6bfe1a82893" Jan 24 00:46:14 crc kubenswrapper[4676]: E0124 00:46:14.252803 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"727704ff4f53ba7b13f4b78ff613c85e8daf69033f4490df8504f6bfe1a82893\": container with ID starting with 727704ff4f53ba7b13f4b78ff613c85e8daf69033f4490df8504f6bfe1a82893 not found: ID does not exist" containerID="727704ff4f53ba7b13f4b78ff613c85e8daf69033f4490df8504f6bfe1a82893" Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.252833 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"727704ff4f53ba7b13f4b78ff613c85e8daf69033f4490df8504f6bfe1a82893"} err="failed to get container status \"727704ff4f53ba7b13f4b78ff613c85e8daf69033f4490df8504f6bfe1a82893\": rpc error: code = NotFound desc = could not find container \"727704ff4f53ba7b13f4b78ff613c85e8daf69033f4490df8504f6bfe1a82893\": container with ID starting with 727704ff4f53ba7b13f4b78ff613c85e8daf69033f4490df8504f6bfe1a82893 not found: ID does not exist" Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.252858 4676 scope.go:117] "RemoveContainer" containerID="01b3ed872259c694bfc2d43f160db2bacf44f5bfd0e78056095d41e8770fd6f3" Jan 24 00:46:14 crc kubenswrapper[4676]: E0124 00:46:14.253211 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01b3ed872259c694bfc2d43f160db2bacf44f5bfd0e78056095d41e8770fd6f3\": container with ID starting with 01b3ed872259c694bfc2d43f160db2bacf44f5bfd0e78056095d41e8770fd6f3 not found: ID does not exist" containerID="01b3ed872259c694bfc2d43f160db2bacf44f5bfd0e78056095d41e8770fd6f3" Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.253233 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b3ed872259c694bfc2d43f160db2bacf44f5bfd0e78056095d41e8770fd6f3"} err="failed to get container status \"01b3ed872259c694bfc2d43f160db2bacf44f5bfd0e78056095d41e8770fd6f3\": rpc error: code = NotFound desc = could not find container \"01b3ed872259c694bfc2d43f160db2bacf44f5bfd0e78056095d41e8770fd6f3\": container with ID 
starting with 01b3ed872259c694bfc2d43f160db2bacf44f5bfd0e78056095d41e8770fd6f3 not found: ID does not exist" Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.253247 4676 scope.go:117] "RemoveContainer" containerID="e8ecd8886191eaccb5939178d7358f3825d6ba6249f300176017ef377067634e" Jan 24 00:46:14 crc kubenswrapper[4676]: E0124 00:46:14.253542 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8ecd8886191eaccb5939178d7358f3825d6ba6249f300176017ef377067634e\": container with ID starting with e8ecd8886191eaccb5939178d7358f3825d6ba6249f300176017ef377067634e not found: ID does not exist" containerID="e8ecd8886191eaccb5939178d7358f3825d6ba6249f300176017ef377067634e" Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.253560 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ecd8886191eaccb5939178d7358f3825d6ba6249f300176017ef377067634e"} err="failed to get container status \"e8ecd8886191eaccb5939178d7358f3825d6ba6249f300176017ef377067634e\": rpc error: code = NotFound desc = could not find container \"e8ecd8886191eaccb5939178d7358f3825d6ba6249f300176017ef377067634e\": container with ID starting with e8ecd8886191eaccb5939178d7358f3825d6ba6249f300176017ef377067634e not found: ID does not exist" Jan 24 00:46:14 crc kubenswrapper[4676]: I0124 00:46:14.266904 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20acbbf0-6ea7-4138-9b68-4ab7b31071e5" path="/var/lib/kubelet/pods/20acbbf0-6ea7-4138-9b68-4ab7b31071e5/volumes" Jan 24 00:46:19 crc kubenswrapper[4676]: I0124 00:46:19.257352 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:46:19 crc kubenswrapper[4676]: E0124 00:46:19.258344 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:46:30 crc kubenswrapper[4676]: I0124 00:46:30.257061 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:46:30 crc kubenswrapper[4676]: E0124 00:46:30.258153 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:46:42 crc kubenswrapper[4676]: I0124 00:46:42.256155 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:46:43 crc kubenswrapper[4676]: I0124 00:46:43.423151 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"50556dab9c423476f3e768a6107a078181f27e372f40fe19e80cac55ab001315"} Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.447729 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8bhxt"] Jan 24 00:47:50 crc kubenswrapper[4676]: E0124 00:47:50.448611 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20acbbf0-6ea7-4138-9b68-4ab7b31071e5" containerName="extract-utilities" Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.448628 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="20acbbf0-6ea7-4138-9b68-4ab7b31071e5" 
containerName="extract-utilities" Jan 24 00:47:50 crc kubenswrapper[4676]: E0124 00:47:50.448663 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20acbbf0-6ea7-4138-9b68-4ab7b31071e5" containerName="extract-content" Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.448673 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="20acbbf0-6ea7-4138-9b68-4ab7b31071e5" containerName="extract-content" Jan 24 00:47:50 crc kubenswrapper[4676]: E0124 00:47:50.448693 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20acbbf0-6ea7-4138-9b68-4ab7b31071e5" containerName="registry-server" Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.448702 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="20acbbf0-6ea7-4138-9b68-4ab7b31071e5" containerName="registry-server" Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.448916 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="20acbbf0-6ea7-4138-9b68-4ab7b31071e5" containerName="registry-server" Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.453491 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.472222 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8bhxt"] Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.589356 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-utilities\") pod \"redhat-operators-8bhxt\" (UID: \"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846\") " pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.589432 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-catalog-content\") pod \"redhat-operators-8bhxt\" (UID: \"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846\") " pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.589846 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpkj4\" (UniqueName: \"kubernetes.io/projected/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-kube-api-access-rpkj4\") pod \"redhat-operators-8bhxt\" (UID: \"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846\") " pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.691831 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpkj4\" (UniqueName: \"kubernetes.io/projected/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-kube-api-access-rpkj4\") pod \"redhat-operators-8bhxt\" (UID: \"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846\") " pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.691960 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-utilities\") pod \"redhat-operators-8bhxt\" (UID: \"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846\") " pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.692007 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-catalog-content\") pod \"redhat-operators-8bhxt\" (UID: \"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846\") " pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.692705 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-catalog-content\") pod \"redhat-operators-8bhxt\" (UID: \"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846\") " pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.692831 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-utilities\") pod \"redhat-operators-8bhxt\" (UID: \"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846\") " pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.716734 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpkj4\" (UniqueName: \"kubernetes.io/projected/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-kube-api-access-rpkj4\") pod \"redhat-operators-8bhxt\" (UID: \"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846\") " pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:47:50 crc kubenswrapper[4676]: I0124 00:47:50.773436 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:47:51 crc kubenswrapper[4676]: I0124 00:47:51.279760 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8bhxt"] Jan 24 00:47:52 crc kubenswrapper[4676]: I0124 00:47:52.116049 4676 generic.go:334] "Generic (PLEG): container finished" podID="2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" containerID="b5edc96625c9acbc57cac8015a5d029ceee5b34c3889182c899b340e2f68ab08" exitCode=0 Jan 24 00:47:52 crc kubenswrapper[4676]: I0124 00:47:52.116317 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bhxt" event={"ID":"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846","Type":"ContainerDied","Data":"b5edc96625c9acbc57cac8015a5d029ceee5b34c3889182c899b340e2f68ab08"} Jan 24 00:47:52 crc kubenswrapper[4676]: I0124 00:47:52.116343 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bhxt" event={"ID":"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846","Type":"ContainerStarted","Data":"b2d4b4da885e3e4644df949bcafb64ca58f900ed27867e562ecef9983b65ab9e"} Jan 24 00:47:54 crc kubenswrapper[4676]: I0124 00:47:54.132205 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bhxt" event={"ID":"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846","Type":"ContainerStarted","Data":"2b74f2bb198b1e49d3579f7e148aa0f1446f32a9a24a52cd446f297186fa53a7"} Jan 24 00:47:57 crc kubenswrapper[4676]: I0124 00:47:57.157639 4676 generic.go:334] "Generic (PLEG): container finished" podID="2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" containerID="2b74f2bb198b1e49d3579f7e148aa0f1446f32a9a24a52cd446f297186fa53a7" exitCode=0 Jan 24 00:47:57 crc kubenswrapper[4676]: I0124 00:47:57.157688 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bhxt" 
event={"ID":"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846","Type":"ContainerDied","Data":"2b74f2bb198b1e49d3579f7e148aa0f1446f32a9a24a52cd446f297186fa53a7"} Jan 24 00:47:58 crc kubenswrapper[4676]: I0124 00:47:58.168897 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bhxt" event={"ID":"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846","Type":"ContainerStarted","Data":"986156910a9642cbf653dabbf5578f7a1410cb78bfa71ced2b3ac3f6b7af7a98"} Jan 24 00:47:59 crc kubenswrapper[4676]: I0124 00:47:59.196033 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8bhxt" podStartSLOduration=3.621391472 podStartE2EDuration="9.196008987s" podCreationTimestamp="2026-01-24 00:47:50 +0000 UTC" firstStartedPulling="2026-01-24 00:47:52.119039181 +0000 UTC m=+2656.149010182" lastFinishedPulling="2026-01-24 00:47:57.693656696 +0000 UTC m=+2661.723627697" observedRunningTime="2026-01-24 00:47:59.193329194 +0000 UTC m=+2663.223300195" watchObservedRunningTime="2026-01-24 00:47:59.196008987 +0000 UTC m=+2663.225980008" Jan 24 00:48:00 crc kubenswrapper[4676]: I0124 00:48:00.774046 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:48:00 crc kubenswrapper[4676]: I0124 00:48:00.774091 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:48:01 crc kubenswrapper[4676]: I0124 00:48:01.822875 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8bhxt" podUID="2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" containerName="registry-server" probeResult="failure" output=< Jan 24 00:48:01 crc kubenswrapper[4676]: timeout: failed to connect service ":50051" within 1s Jan 24 00:48:01 crc kubenswrapper[4676]: > Jan 24 00:48:10 crc kubenswrapper[4676]: I0124 00:48:10.850149 4676 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:48:10 crc kubenswrapper[4676]: I0124 00:48:10.924466 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:48:11 crc kubenswrapper[4676]: I0124 00:48:11.095489 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8bhxt"] Jan 24 00:48:12 crc kubenswrapper[4676]: I0124 00:48:12.315813 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8bhxt" podUID="2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" containerName="registry-server" containerID="cri-o://986156910a9642cbf653dabbf5578f7a1410cb78bfa71ced2b3ac3f6b7af7a98" gracePeriod=2 Jan 24 00:48:12 crc kubenswrapper[4676]: I0124 00:48:12.766112 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:48:12 crc kubenswrapper[4676]: I0124 00:48:12.887200 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-utilities\") pod \"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846\" (UID: \"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846\") " Jan 24 00:48:12 crc kubenswrapper[4676]: I0124 00:48:12.887298 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpkj4\" (UniqueName: \"kubernetes.io/projected/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-kube-api-access-rpkj4\") pod \"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846\" (UID: \"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846\") " Jan 24 00:48:12 crc kubenswrapper[4676]: I0124 00:48:12.887398 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-catalog-content\") pod 
\"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846\" (UID: \"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846\") " Jan 24 00:48:12 crc kubenswrapper[4676]: I0124 00:48:12.890114 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-utilities" (OuterVolumeSpecName: "utilities") pod "2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" (UID: "2f8aa49c-c56e-48e0-a8e2-e7a8777d4846"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:48:12 crc kubenswrapper[4676]: I0124 00:48:12.897993 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-kube-api-access-rpkj4" (OuterVolumeSpecName: "kube-api-access-rpkj4") pod "2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" (UID: "2f8aa49c-c56e-48e0-a8e2-e7a8777d4846"). InnerVolumeSpecName "kube-api-access-rpkj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:48:12 crc kubenswrapper[4676]: I0124 00:48:12.989768 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:48:12 crc kubenswrapper[4676]: I0124 00:48:12.989805 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpkj4\" (UniqueName: \"kubernetes.io/projected/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-kube-api-access-rpkj4\") on node \"crc\" DevicePath \"\"" Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.019026 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" (UID: "2f8aa49c-c56e-48e0-a8e2-e7a8777d4846"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.091798 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.331250 4676 generic.go:334] "Generic (PLEG): container finished" podID="2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" containerID="986156910a9642cbf653dabbf5578f7a1410cb78bfa71ced2b3ac3f6b7af7a98" exitCode=0 Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.331340 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bhxt" event={"ID":"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846","Type":"ContainerDied","Data":"986156910a9642cbf653dabbf5578f7a1410cb78bfa71ced2b3ac3f6b7af7a98"} Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.331435 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8bhxt" Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.332082 4676 scope.go:117] "RemoveContainer" containerID="986156910a9642cbf653dabbf5578f7a1410cb78bfa71ced2b3ac3f6b7af7a98" Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.331903 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bhxt" event={"ID":"2f8aa49c-c56e-48e0-a8e2-e7a8777d4846","Type":"ContainerDied","Data":"b2d4b4da885e3e4644df949bcafb64ca58f900ed27867e562ecef9983b65ab9e"} Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.386759 4676 scope.go:117] "RemoveContainer" containerID="2b74f2bb198b1e49d3579f7e148aa0f1446f32a9a24a52cd446f297186fa53a7" Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.409110 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8bhxt"] Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.423106 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8bhxt"] Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.424898 4676 scope.go:117] "RemoveContainer" containerID="b5edc96625c9acbc57cac8015a5d029ceee5b34c3889182c899b340e2f68ab08" Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.483134 4676 scope.go:117] "RemoveContainer" containerID="986156910a9642cbf653dabbf5578f7a1410cb78bfa71ced2b3ac3f6b7af7a98" Jan 24 00:48:13 crc kubenswrapper[4676]: E0124 00:48:13.484095 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"986156910a9642cbf653dabbf5578f7a1410cb78bfa71ced2b3ac3f6b7af7a98\": container with ID starting with 986156910a9642cbf653dabbf5578f7a1410cb78bfa71ced2b3ac3f6b7af7a98 not found: ID does not exist" containerID="986156910a9642cbf653dabbf5578f7a1410cb78bfa71ced2b3ac3f6b7af7a98" Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.484157 4676 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"986156910a9642cbf653dabbf5578f7a1410cb78bfa71ced2b3ac3f6b7af7a98"} err="failed to get container status \"986156910a9642cbf653dabbf5578f7a1410cb78bfa71ced2b3ac3f6b7af7a98\": rpc error: code = NotFound desc = could not find container \"986156910a9642cbf653dabbf5578f7a1410cb78bfa71ced2b3ac3f6b7af7a98\": container with ID starting with 986156910a9642cbf653dabbf5578f7a1410cb78bfa71ced2b3ac3f6b7af7a98 not found: ID does not exist" Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.484195 4676 scope.go:117] "RemoveContainer" containerID="2b74f2bb198b1e49d3579f7e148aa0f1446f32a9a24a52cd446f297186fa53a7" Jan 24 00:48:13 crc kubenswrapper[4676]: E0124 00:48:13.485464 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b74f2bb198b1e49d3579f7e148aa0f1446f32a9a24a52cd446f297186fa53a7\": container with ID starting with 2b74f2bb198b1e49d3579f7e148aa0f1446f32a9a24a52cd446f297186fa53a7 not found: ID does not exist" containerID="2b74f2bb198b1e49d3579f7e148aa0f1446f32a9a24a52cd446f297186fa53a7" Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.485697 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b74f2bb198b1e49d3579f7e148aa0f1446f32a9a24a52cd446f297186fa53a7"} err="failed to get container status \"2b74f2bb198b1e49d3579f7e148aa0f1446f32a9a24a52cd446f297186fa53a7\": rpc error: code = NotFound desc = could not find container \"2b74f2bb198b1e49d3579f7e148aa0f1446f32a9a24a52cd446f297186fa53a7\": container with ID starting with 2b74f2bb198b1e49d3579f7e148aa0f1446f32a9a24a52cd446f297186fa53a7 not found: ID does not exist" Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.485907 4676 scope.go:117] "RemoveContainer" containerID="b5edc96625c9acbc57cac8015a5d029ceee5b34c3889182c899b340e2f68ab08" Jan 24 00:48:13 crc kubenswrapper[4676]: E0124 
00:48:13.486595 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5edc96625c9acbc57cac8015a5d029ceee5b34c3889182c899b340e2f68ab08\": container with ID starting with b5edc96625c9acbc57cac8015a5d029ceee5b34c3889182c899b340e2f68ab08 not found: ID does not exist" containerID="b5edc96625c9acbc57cac8015a5d029ceee5b34c3889182c899b340e2f68ab08" Jan 24 00:48:13 crc kubenswrapper[4676]: I0124 00:48:13.486658 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5edc96625c9acbc57cac8015a5d029ceee5b34c3889182c899b340e2f68ab08"} err="failed to get container status \"b5edc96625c9acbc57cac8015a5d029ceee5b34c3889182c899b340e2f68ab08\": rpc error: code = NotFound desc = could not find container \"b5edc96625c9acbc57cac8015a5d029ceee5b34c3889182c899b340e2f68ab08\": container with ID starting with b5edc96625c9acbc57cac8015a5d029ceee5b34c3889182c899b340e2f68ab08 not found: ID does not exist" Jan 24 00:48:14 crc kubenswrapper[4676]: I0124 00:48:14.269281 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" path="/var/lib/kubelet/pods/2f8aa49c-c56e-48e0-a8e2-e7a8777d4846/volumes" Jan 24 00:48:33 crc kubenswrapper[4676]: I0124 00:48:33.547582 4676 generic.go:334] "Generic (PLEG): container finished" podID="75cae94d-b818-4f92-b42d-fd8cec63a657" containerID="9049acca3eaa19729de3abedb765017e8466c178a888c60c1ddeb96f4f0c0556" exitCode=0 Jan 24 00:48:33 crc kubenswrapper[4676]: I0124 00:48:33.547813 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" event={"ID":"75cae94d-b818-4f92-b42d-fd8cec63a657","Type":"ContainerDied","Data":"9049acca3eaa19729de3abedb765017e8466c178a888c60c1ddeb96f4f0c0556"} Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.016335 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.192006 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-combined-ca-bundle\") pod \"75cae94d-b818-4f92-b42d-fd8cec63a657\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.192280 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-cell1-compute-config-0\") pod \"75cae94d-b818-4f92-b42d-fd8cec63a657\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.192443 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-ssh-key-openstack-edpm-ipam\") pod \"75cae94d-b818-4f92-b42d-fd8cec63a657\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.192588 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-cell1-compute-config-1\") pod \"75cae94d-b818-4f92-b42d-fd8cec63a657\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.192681 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-migration-ssh-key-1\") pod \"75cae94d-b818-4f92-b42d-fd8cec63a657\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " Jan 24 00:48:35 crc kubenswrapper[4676]: 
I0124 00:48:35.192771 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-inventory\") pod \"75cae94d-b818-4f92-b42d-fd8cec63a657\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.192852 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6689\" (UniqueName: \"kubernetes.io/projected/75cae94d-b818-4f92-b42d-fd8cec63a657-kube-api-access-g6689\") pod \"75cae94d-b818-4f92-b42d-fd8cec63a657\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.193001 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-extra-config-0\") pod \"75cae94d-b818-4f92-b42d-fd8cec63a657\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.193124 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-migration-ssh-key-0\") pod \"75cae94d-b818-4f92-b42d-fd8cec63a657\" (UID: \"75cae94d-b818-4f92-b42d-fd8cec63a657\") " Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.213065 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75cae94d-b818-4f92-b42d-fd8cec63a657-kube-api-access-g6689" (OuterVolumeSpecName: "kube-api-access-g6689") pod "75cae94d-b818-4f92-b42d-fd8cec63a657" (UID: "75cae94d-b818-4f92-b42d-fd8cec63a657"). InnerVolumeSpecName "kube-api-access-g6689". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.213212 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "75cae94d-b818-4f92-b42d-fd8cec63a657" (UID: "75cae94d-b818-4f92-b42d-fd8cec63a657"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.227343 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "75cae94d-b818-4f92-b42d-fd8cec63a657" (UID: "75cae94d-b818-4f92-b42d-fd8cec63a657"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.231756 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "75cae94d-b818-4f92-b42d-fd8cec63a657" (UID: "75cae94d-b818-4f92-b42d-fd8cec63a657"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.232121 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "75cae94d-b818-4f92-b42d-fd8cec63a657" (UID: "75cae94d-b818-4f92-b42d-fd8cec63a657"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.233936 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "75cae94d-b818-4f92-b42d-fd8cec63a657" (UID: "75cae94d-b818-4f92-b42d-fd8cec63a657"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.254518 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "75cae94d-b818-4f92-b42d-fd8cec63a657" (UID: "75cae94d-b818-4f92-b42d-fd8cec63a657"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.262770 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-inventory" (OuterVolumeSpecName: "inventory") pod "75cae94d-b818-4f92-b42d-fd8cec63a657" (UID: "75cae94d-b818-4f92-b42d-fd8cec63a657"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.270116 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "75cae94d-b818-4f92-b42d-fd8cec63a657" (UID: "75cae94d-b818-4f92-b42d-fd8cec63a657"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.296425 4676 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.296455 4676 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.296464 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.296472 4676 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.296481 4676 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.296490 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.296498 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6689\" (UniqueName: \"kubernetes.io/projected/75cae94d-b818-4f92-b42d-fd8cec63a657-kube-api-access-g6689\") 
on node \"crc\" DevicePath \"\"" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.296508 4676 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.296516 4676 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/75cae94d-b818-4f92-b42d-fd8cec63a657-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.562395 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" event={"ID":"75cae94d-b818-4f92-b42d-fd8cec63a657","Type":"ContainerDied","Data":"7a1a57426601586865e4d5bc7b431f54c628df289eb8caca5bd5436e21faee21"} Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.562647 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a1a57426601586865e4d5bc7b431f54c628df289eb8caca5bd5436e21faee21" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.562429 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5cl2g" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.753804 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv"] Jan 24 00:48:35 crc kubenswrapper[4676]: E0124 00:48:35.754258 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" containerName="registry-server" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.754280 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" containerName="registry-server" Jan 24 00:48:35 crc kubenswrapper[4676]: E0124 00:48:35.754321 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" containerName="extract-content" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.754330 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" containerName="extract-content" Jan 24 00:48:35 crc kubenswrapper[4676]: E0124 00:48:35.754353 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75cae94d-b818-4f92-b42d-fd8cec63a657" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.754362 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="75cae94d-b818-4f92-b42d-fd8cec63a657" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 24 00:48:35 crc kubenswrapper[4676]: E0124 00:48:35.754397 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" containerName="extract-utilities" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.754406 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" containerName="extract-utilities" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.754641 4676 
memory_manager.go:354] "RemoveStaleState removing state" podUID="75cae94d-b818-4f92-b42d-fd8cec63a657" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.754671 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8aa49c-c56e-48e0-a8e2-e7a8777d4846" containerName="registry-server" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.755668 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.758149 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.758158 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.758162 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.758449 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5vl7p" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.758616 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.765876 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv"] Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.909815 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: 
\"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.910089 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.910181 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcmjf\" (UniqueName: \"kubernetes.io/projected/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-kube-api-access-bcmjf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.910426 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.910531 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.910613 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:35 crc kubenswrapper[4676]: I0124 00:48:35.910927 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.012349 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.012620 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.013060 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcmjf\" (UniqueName: \"kubernetes.io/projected/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-kube-api-access-bcmjf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.013195 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.013273 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.013353 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.013553 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.017746 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.017958 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.018586 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.018831 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.018923 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.020288 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.032767 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcmjf\" (UniqueName: \"kubernetes.io/projected/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-kube-api-access-bcmjf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.073520 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:48:36 crc kubenswrapper[4676]: I0124 00:48:36.620832 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv"] Jan 24 00:48:37 crc kubenswrapper[4676]: I0124 00:48:37.241413 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 00:48:37 crc kubenswrapper[4676]: I0124 00:48:37.579049 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" event={"ID":"7ef31551-e4ed-48d0-a4d6-f9c2fb515966","Type":"ContainerStarted","Data":"49215d2f6ff846e5d4f5385e57b7216b7767e35f242d154ba2d3618223caeae2"} Jan 24 00:48:37 crc kubenswrapper[4676]: I0124 00:48:37.579350 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" event={"ID":"7ef31551-e4ed-48d0-a4d6-f9c2fb515966","Type":"ContainerStarted","Data":"8044fbd530591f1f5965e015b66ccd01325bde8aa28d26f8d37467c9160466d7"} Jan 24 00:48:37 crc kubenswrapper[4676]: I0124 00:48:37.601296 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" podStartSLOduration=1.985670659 podStartE2EDuration="2.601279921s" podCreationTimestamp="2026-01-24 00:48:35 +0000 UTC" firstStartedPulling="2026-01-24 00:48:36.621587448 +0000 UTC m=+2700.651558469" lastFinishedPulling="2026-01-24 00:48:37.23719669 +0000 UTC m=+2701.267167731" observedRunningTime="2026-01-24 00:48:37.598397722 +0000 UTC m=+2701.628368723" watchObservedRunningTime="2026-01-24 00:48:37.601279921 +0000 UTC m=+2701.631250922" Jan 24 00:49:09 crc kubenswrapper[4676]: I0124 00:49:09.364327 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:49:09 crc kubenswrapper[4676]: I0124 00:49:09.364908 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:49:39 crc kubenswrapper[4676]: I0124 00:49:39.364314 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:49:39 crc kubenswrapper[4676]: I0124 00:49:39.364820 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:50:09 crc kubenswrapper[4676]: I0124 00:50:09.364945 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:50:09 crc kubenswrapper[4676]: I0124 00:50:09.365796 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 24 00:50:09 crc kubenswrapper[4676]: I0124 00:50:09.365866 4676 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:50:09 crc kubenswrapper[4676]: I0124 00:50:09.367256 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50556dab9c423476f3e768a6107a078181f27e372f40fe19e80cac55ab001315"} pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 00:50:09 crc kubenswrapper[4676]: I0124 00:50:09.367344 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" containerID="cri-o://50556dab9c423476f3e768a6107a078181f27e372f40fe19e80cac55ab001315" gracePeriod=600 Jan 24 00:50:10 crc kubenswrapper[4676]: I0124 00:50:10.489193 4676 generic.go:334] "Generic (PLEG): container finished" podID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerID="50556dab9c423476f3e768a6107a078181f27e372f40fe19e80cac55ab001315" exitCode=0 Jan 24 00:50:10 crc kubenswrapper[4676]: I0124 00:50:10.489487 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerDied","Data":"50556dab9c423476f3e768a6107a078181f27e372f40fe19e80cac55ab001315"} Jan 24 00:50:10 crc kubenswrapper[4676]: I0124 00:50:10.489937 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398"} Jan 24 
00:50:10 crc kubenswrapper[4676]: I0124 00:50:10.489961 4676 scope.go:117] "RemoveContainer" containerID="0cc55a19ee9f58fa576e51c2d6dcb1203ed54bdb81c4ec86abf6a39092861cac" Jan 24 00:51:59 crc kubenswrapper[4676]: I0124 00:51:59.640627 4676 generic.go:334] "Generic (PLEG): container finished" podID="7ef31551-e4ed-48d0-a4d6-f9c2fb515966" containerID="49215d2f6ff846e5d4f5385e57b7216b7767e35f242d154ba2d3618223caeae2" exitCode=0 Jan 24 00:51:59 crc kubenswrapper[4676]: I0124 00:51:59.640836 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" event={"ID":"7ef31551-e4ed-48d0-a4d6-f9c2fb515966","Type":"ContainerDied","Data":"49215d2f6ff846e5d4f5385e57b7216b7767e35f242d154ba2d3618223caeae2"} Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.127670 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.268698 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-inventory\") pod \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.269089 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-2\") pod \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.269136 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-0\") pod 
\"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.269236 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-telemetry-combined-ca-bundle\") pod \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.269363 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-1\") pod \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.269458 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ssh-key-openstack-edpm-ipam\") pod \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.270492 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcmjf\" (UniqueName: \"kubernetes.io/projected/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-kube-api-access-bcmjf\") pod \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\" (UID: \"7ef31551-e4ed-48d0-a4d6-f9c2fb515966\") " Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.277112 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-kube-api-access-bcmjf" (OuterVolumeSpecName: "kube-api-access-bcmjf") pod "7ef31551-e4ed-48d0-a4d6-f9c2fb515966" (UID: "7ef31551-e4ed-48d0-a4d6-f9c2fb515966"). 
InnerVolumeSpecName "kube-api-access-bcmjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.277504 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7ef31551-e4ed-48d0-a4d6-f9c2fb515966" (UID: "7ef31551-e4ed-48d0-a4d6-f9c2fb515966"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.298355 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7ef31551-e4ed-48d0-a4d6-f9c2fb515966" (UID: "7ef31551-e4ed-48d0-a4d6-f9c2fb515966"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.303851 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-inventory" (OuterVolumeSpecName: "inventory") pod "7ef31551-e4ed-48d0-a4d6-f9c2fb515966" (UID: "7ef31551-e4ed-48d0-a4d6-f9c2fb515966"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.308648 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "7ef31551-e4ed-48d0-a4d6-f9c2fb515966" (UID: "7ef31551-e4ed-48d0-a4d6-f9c2fb515966"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.312554 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "7ef31551-e4ed-48d0-a4d6-f9c2fb515966" (UID: "7ef31551-e4ed-48d0-a4d6-f9c2fb515966"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.327807 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "7ef31551-e4ed-48d0-a4d6-f9c2fb515966" (UID: "7ef31551-e4ed-48d0-a4d6-f9c2fb515966"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.373129 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.373169 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcmjf\" (UniqueName: \"kubernetes.io/projected/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-kube-api-access-bcmjf\") on node \"crc\" DevicePath \"\"" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.373183 4676 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.373195 4676 reconciler_common.go:293] "Volume detached for volume 
\"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.373207 4676 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.373223 4676 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.373236 4676 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7ef31551-e4ed-48d0-a4d6-f9c2fb515966-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.663786 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" event={"ID":"7ef31551-e4ed-48d0-a4d6-f9c2fb515966","Type":"ContainerDied","Data":"8044fbd530591f1f5965e015b66ccd01325bde8aa28d26f8d37467c9160466d7"} Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.663830 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8044fbd530591f1f5965e015b66ccd01325bde8aa28d26f8d37467c9160466d7" Jan 24 00:52:01 crc kubenswrapper[4676]: I0124 00:52:01.663884 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv" Jan 24 00:52:09 crc kubenswrapper[4676]: I0124 00:52:09.364836 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:52:09 crc kubenswrapper[4676]: I0124 00:52:09.365587 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.142848 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4hpk9"] Jan 24 00:52:26 crc kubenswrapper[4676]: E0124 00:52:26.144875 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef31551-e4ed-48d0-a4d6-f9c2fb515966" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.144994 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef31551-e4ed-48d0-a4d6-f9c2fb515966" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.145284 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef31551-e4ed-48d0-a4d6-f9c2fb515966" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.147289 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.168197 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hpk9"] Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.301750 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-utilities\") pod \"redhat-marketplace-4hpk9\" (UID: \"521707d5-eb8e-4c5c-b171-2a001bcdf0a1\") " pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.301936 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-catalog-content\") pod \"redhat-marketplace-4hpk9\" (UID: \"521707d5-eb8e-4c5c-b171-2a001bcdf0a1\") " pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.301965 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct2bk\" (UniqueName: \"kubernetes.io/projected/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-kube-api-access-ct2bk\") pod \"redhat-marketplace-4hpk9\" (UID: \"521707d5-eb8e-4c5c-b171-2a001bcdf0a1\") " pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.403017 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-catalog-content\") pod \"redhat-marketplace-4hpk9\" (UID: \"521707d5-eb8e-4c5c-b171-2a001bcdf0a1\") " pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.403068 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ct2bk\" (UniqueName: \"kubernetes.io/projected/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-kube-api-access-ct2bk\") pod \"redhat-marketplace-4hpk9\" (UID: \"521707d5-eb8e-4c5c-b171-2a001bcdf0a1\") " pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.403156 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-utilities\") pod \"redhat-marketplace-4hpk9\" (UID: \"521707d5-eb8e-4c5c-b171-2a001bcdf0a1\") " pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.404921 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-catalog-content\") pod \"redhat-marketplace-4hpk9\" (UID: \"521707d5-eb8e-4c5c-b171-2a001bcdf0a1\") " pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.404942 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-utilities\") pod \"redhat-marketplace-4hpk9\" (UID: \"521707d5-eb8e-4c5c-b171-2a001bcdf0a1\") " pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.425357 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct2bk\" (UniqueName: \"kubernetes.io/projected/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-kube-api-access-ct2bk\") pod \"redhat-marketplace-4hpk9\" (UID: \"521707d5-eb8e-4c5c-b171-2a001bcdf0a1\") " pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.465867 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:26 crc kubenswrapper[4676]: I0124 00:52:26.981658 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hpk9"] Jan 24 00:52:26 crc kubenswrapper[4676]: W0124 00:52:26.987616 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod521707d5_eb8e_4c5c_b171_2a001bcdf0a1.slice/crio-b819727379ac8360ec1c277c305e34faeb524ceba12d9ad37312d65500a3c018 WatchSource:0}: Error finding container b819727379ac8360ec1c277c305e34faeb524ceba12d9ad37312d65500a3c018: Status 404 returned error can't find the container with id b819727379ac8360ec1c277c305e34faeb524ceba12d9ad37312d65500a3c018 Jan 24 00:52:27 crc kubenswrapper[4676]: I0124 00:52:27.921433 4676 generic.go:334] "Generic (PLEG): container finished" podID="521707d5-eb8e-4c5c-b171-2a001bcdf0a1" containerID="3bb1f921af66f44c77cccc2df88d800be23799a13d52bafb67d019e34d21813b" exitCode=0 Jan 24 00:52:27 crc kubenswrapper[4676]: I0124 00:52:27.921573 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hpk9" event={"ID":"521707d5-eb8e-4c5c-b171-2a001bcdf0a1","Type":"ContainerDied","Data":"3bb1f921af66f44c77cccc2df88d800be23799a13d52bafb67d019e34d21813b"} Jan 24 00:52:27 crc kubenswrapper[4676]: I0124 00:52:27.926493 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hpk9" event={"ID":"521707d5-eb8e-4c5c-b171-2a001bcdf0a1","Type":"ContainerStarted","Data":"b819727379ac8360ec1c277c305e34faeb524ceba12d9ad37312d65500a3c018"} Jan 24 00:52:27 crc kubenswrapper[4676]: I0124 00:52:27.926552 4676 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 00:52:28 crc kubenswrapper[4676]: I0124 00:52:28.936241 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-4hpk9" event={"ID":"521707d5-eb8e-4c5c-b171-2a001bcdf0a1","Type":"ContainerStarted","Data":"88d697e7ed40846c3bb522d739af626a9f6db8495cc8211d536e953fcc0b640c"} Jan 24 00:52:29 crc kubenswrapper[4676]: I0124 00:52:29.951425 4676 generic.go:334] "Generic (PLEG): container finished" podID="521707d5-eb8e-4c5c-b171-2a001bcdf0a1" containerID="88d697e7ed40846c3bb522d739af626a9f6db8495cc8211d536e953fcc0b640c" exitCode=0 Jan 24 00:52:29 crc kubenswrapper[4676]: I0124 00:52:29.951486 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hpk9" event={"ID":"521707d5-eb8e-4c5c-b171-2a001bcdf0a1","Type":"ContainerDied","Data":"88d697e7ed40846c3bb522d739af626a9f6db8495cc8211d536e953fcc0b640c"} Jan 24 00:52:30 crc kubenswrapper[4676]: I0124 00:52:30.967495 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hpk9" event={"ID":"521707d5-eb8e-4c5c-b171-2a001bcdf0a1","Type":"ContainerStarted","Data":"e41843d9624da2bf74d0d9538e6a112fc9774350c9ef35f851f14169594e626d"} Jan 24 00:52:30 crc kubenswrapper[4676]: I0124 00:52:30.996624 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4hpk9" podStartSLOduration=2.433693255 podStartE2EDuration="4.996609787s" podCreationTimestamp="2026-01-24 00:52:26 +0000 UTC" firstStartedPulling="2026-01-24 00:52:27.924283229 +0000 UTC m=+2931.954254270" lastFinishedPulling="2026-01-24 00:52:30.487199801 +0000 UTC m=+2934.517170802" observedRunningTime="2026-01-24 00:52:30.99146194 +0000 UTC m=+2935.021432971" watchObservedRunningTime="2026-01-24 00:52:30.996609787 +0000 UTC m=+2935.026580778" Jan 24 00:52:36 crc kubenswrapper[4676]: I0124 00:52:36.466459 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:36 crc kubenswrapper[4676]: I0124 00:52:36.467101 4676 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:36 crc kubenswrapper[4676]: I0124 00:52:36.539248 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:37 crc kubenswrapper[4676]: I0124 00:52:37.125726 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:37 crc kubenswrapper[4676]: I0124 00:52:37.191316 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hpk9"] Jan 24 00:52:39 crc kubenswrapper[4676]: I0124 00:52:39.069312 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4hpk9" podUID="521707d5-eb8e-4c5c-b171-2a001bcdf0a1" containerName="registry-server" containerID="cri-o://e41843d9624da2bf74d0d9538e6a112fc9774350c9ef35f851f14169594e626d" gracePeriod=2 Jan 24 00:52:39 crc kubenswrapper[4676]: I0124 00:52:39.364952 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:52:39 crc kubenswrapper[4676]: I0124 00:52:39.365011 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:52:39 crc kubenswrapper[4676]: I0124 00:52:39.595733 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:39 crc kubenswrapper[4676]: I0124 00:52:39.720302 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct2bk\" (UniqueName: \"kubernetes.io/projected/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-kube-api-access-ct2bk\") pod \"521707d5-eb8e-4c5c-b171-2a001bcdf0a1\" (UID: \"521707d5-eb8e-4c5c-b171-2a001bcdf0a1\") " Jan 24 00:52:39 crc kubenswrapper[4676]: I0124 00:52:39.720585 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-utilities\") pod \"521707d5-eb8e-4c5c-b171-2a001bcdf0a1\" (UID: \"521707d5-eb8e-4c5c-b171-2a001bcdf0a1\") " Jan 24 00:52:39 crc kubenswrapper[4676]: I0124 00:52:39.720660 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-catalog-content\") pod \"521707d5-eb8e-4c5c-b171-2a001bcdf0a1\" (UID: \"521707d5-eb8e-4c5c-b171-2a001bcdf0a1\") " Jan 24 00:52:39 crc kubenswrapper[4676]: I0124 00:52:39.721357 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-utilities" (OuterVolumeSpecName: "utilities") pod "521707d5-eb8e-4c5c-b171-2a001bcdf0a1" (UID: "521707d5-eb8e-4c5c-b171-2a001bcdf0a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:52:39 crc kubenswrapper[4676]: I0124 00:52:39.729253 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-kube-api-access-ct2bk" (OuterVolumeSpecName: "kube-api-access-ct2bk") pod "521707d5-eb8e-4c5c-b171-2a001bcdf0a1" (UID: "521707d5-eb8e-4c5c-b171-2a001bcdf0a1"). InnerVolumeSpecName "kube-api-access-ct2bk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:52:39 crc kubenswrapper[4676]: I0124 00:52:39.764402 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "521707d5-eb8e-4c5c-b171-2a001bcdf0a1" (UID: "521707d5-eb8e-4c5c-b171-2a001bcdf0a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:52:39 crc kubenswrapper[4676]: I0124 00:52:39.823978 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:52:39 crc kubenswrapper[4676]: I0124 00:52:39.824285 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:52:39 crc kubenswrapper[4676]: I0124 00:52:39.824566 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct2bk\" (UniqueName: \"kubernetes.io/projected/521707d5-eb8e-4c5c-b171-2a001bcdf0a1-kube-api-access-ct2bk\") on node \"crc\" DevicePath \"\"" Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.084088 4676 generic.go:334] "Generic (PLEG): container finished" podID="521707d5-eb8e-4c5c-b171-2a001bcdf0a1" containerID="e41843d9624da2bf74d0d9538e6a112fc9774350c9ef35f851f14169594e626d" exitCode=0 Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.084151 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hpk9" event={"ID":"521707d5-eb8e-4c5c-b171-2a001bcdf0a1","Type":"ContainerDied","Data":"e41843d9624da2bf74d0d9538e6a112fc9774350c9ef35f851f14169594e626d"} Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.084183 4676 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-4hpk9" event={"ID":"521707d5-eb8e-4c5c-b171-2a001bcdf0a1","Type":"ContainerDied","Data":"b819727379ac8360ec1c277c305e34faeb524ceba12d9ad37312d65500a3c018"} Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.084203 4676 scope.go:117] "RemoveContainer" containerID="e41843d9624da2bf74d0d9538e6a112fc9774350c9ef35f851f14169594e626d" Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.084460 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hpk9" Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.142183 4676 scope.go:117] "RemoveContainer" containerID="88d697e7ed40846c3bb522d739af626a9f6db8495cc8211d536e953fcc0b640c" Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.151238 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hpk9"] Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.176018 4676 scope.go:117] "RemoveContainer" containerID="3bb1f921af66f44c77cccc2df88d800be23799a13d52bafb67d019e34d21813b" Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.179168 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hpk9"] Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.231181 4676 scope.go:117] "RemoveContainer" containerID="e41843d9624da2bf74d0d9538e6a112fc9774350c9ef35f851f14169594e626d" Jan 24 00:52:40 crc kubenswrapper[4676]: E0124 00:52:40.232223 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e41843d9624da2bf74d0d9538e6a112fc9774350c9ef35f851f14169594e626d\": container with ID starting with e41843d9624da2bf74d0d9538e6a112fc9774350c9ef35f851f14169594e626d not found: ID does not exist" containerID="e41843d9624da2bf74d0d9538e6a112fc9774350c9ef35f851f14169594e626d" Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.232297 4676 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e41843d9624da2bf74d0d9538e6a112fc9774350c9ef35f851f14169594e626d"} err="failed to get container status \"e41843d9624da2bf74d0d9538e6a112fc9774350c9ef35f851f14169594e626d\": rpc error: code = NotFound desc = could not find container \"e41843d9624da2bf74d0d9538e6a112fc9774350c9ef35f851f14169594e626d\": container with ID starting with e41843d9624da2bf74d0d9538e6a112fc9774350c9ef35f851f14169594e626d not found: ID does not exist" Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.232345 4676 scope.go:117] "RemoveContainer" containerID="88d697e7ed40846c3bb522d739af626a9f6db8495cc8211d536e953fcc0b640c" Jan 24 00:52:40 crc kubenswrapper[4676]: E0124 00:52:40.233326 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88d697e7ed40846c3bb522d739af626a9f6db8495cc8211d536e953fcc0b640c\": container with ID starting with 88d697e7ed40846c3bb522d739af626a9f6db8495cc8211d536e953fcc0b640c not found: ID does not exist" containerID="88d697e7ed40846c3bb522d739af626a9f6db8495cc8211d536e953fcc0b640c" Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.233409 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88d697e7ed40846c3bb522d739af626a9f6db8495cc8211d536e953fcc0b640c"} err="failed to get container status \"88d697e7ed40846c3bb522d739af626a9f6db8495cc8211d536e953fcc0b640c\": rpc error: code = NotFound desc = could not find container \"88d697e7ed40846c3bb522d739af626a9f6db8495cc8211d536e953fcc0b640c\": container with ID starting with 88d697e7ed40846c3bb522d739af626a9f6db8495cc8211d536e953fcc0b640c not found: ID does not exist" Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.233462 4676 scope.go:117] "RemoveContainer" containerID="3bb1f921af66f44c77cccc2df88d800be23799a13d52bafb67d019e34d21813b" Jan 24 00:52:40 crc kubenswrapper[4676]: E0124 
00:52:40.233947 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb1f921af66f44c77cccc2df88d800be23799a13d52bafb67d019e34d21813b\": container with ID starting with 3bb1f921af66f44c77cccc2df88d800be23799a13d52bafb67d019e34d21813b not found: ID does not exist" containerID="3bb1f921af66f44c77cccc2df88d800be23799a13d52bafb67d019e34d21813b" Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.234023 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb1f921af66f44c77cccc2df88d800be23799a13d52bafb67d019e34d21813b"} err="failed to get container status \"3bb1f921af66f44c77cccc2df88d800be23799a13d52bafb67d019e34d21813b\": rpc error: code = NotFound desc = could not find container \"3bb1f921af66f44c77cccc2df88d800be23799a13d52bafb67d019e34d21813b\": container with ID starting with 3bb1f921af66f44c77cccc2df88d800be23799a13d52bafb67d019e34d21813b not found: ID does not exist" Jan 24 00:52:40 crc kubenswrapper[4676]: I0124 00:52:40.274405 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="521707d5-eb8e-4c5c-b171-2a001bcdf0a1" path="/var/lib/kubelet/pods/521707d5-eb8e-4c5c-b171-2a001bcdf0a1/volumes" Jan 24 00:52:59 crc kubenswrapper[4676]: I0124 00:52:59.927045 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 24 00:52:59 crc kubenswrapper[4676]: E0124 00:52:59.928589 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521707d5-eb8e-4c5c-b171-2a001bcdf0a1" containerName="extract-content" Jan 24 00:52:59 crc kubenswrapper[4676]: I0124 00:52:59.928627 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="521707d5-eb8e-4c5c-b171-2a001bcdf0a1" containerName="extract-content" Jan 24 00:52:59 crc kubenswrapper[4676]: E0124 00:52:59.928684 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521707d5-eb8e-4c5c-b171-2a001bcdf0a1" 
containerName="registry-server" Jan 24 00:52:59 crc kubenswrapper[4676]: I0124 00:52:59.928705 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="521707d5-eb8e-4c5c-b171-2a001bcdf0a1" containerName="registry-server" Jan 24 00:52:59 crc kubenswrapper[4676]: E0124 00:52:59.928743 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521707d5-eb8e-4c5c-b171-2a001bcdf0a1" containerName="extract-utilities" Jan 24 00:52:59 crc kubenswrapper[4676]: I0124 00:52:59.928764 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="521707d5-eb8e-4c5c-b171-2a001bcdf0a1" containerName="extract-utilities" Jan 24 00:52:59 crc kubenswrapper[4676]: I0124 00:52:59.929209 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="521707d5-eb8e-4c5c-b171-2a001bcdf0a1" containerName="registry-server" Jan 24 00:52:59 crc kubenswrapper[4676]: I0124 00:52:59.930511 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 24 00:52:59 crc kubenswrapper[4676]: I0124 00:52:59.935460 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 24 00:52:59 crc kubenswrapper[4676]: I0124 00:52:59.935535 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 24 00:52:59 crc kubenswrapper[4676]: I0124 00:52:59.936585 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 24 00:52:59 crc kubenswrapper[4676]: I0124 00:52:59.939032 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-v9f6d" Jan 24 00:52:59 crc kubenswrapper[4676]: I0124 00:52:59.947610 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.062305 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3e2adf44-9053-4dcd-9d47-27910710dbc8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.062409 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.062502 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwz6w\" (UniqueName: \"kubernetes.io/projected/3e2adf44-9053-4dcd-9d47-27910710dbc8-kube-api-access-dwz6w\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.062663 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.062796 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e2adf44-9053-4dcd-9d47-27910710dbc8-config-data\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.062860 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3e2adf44-9053-4dcd-9d47-27910710dbc8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.062933 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.063008 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.063037 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3e2adf44-9053-4dcd-9d47-27910710dbc8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.165092 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.165550 4676 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e2adf44-9053-4dcd-9d47-27910710dbc8-config-data\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.165761 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3e2adf44-9053-4dcd-9d47-27910710dbc8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.165998 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.166214 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.166426 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3e2adf44-9053-4dcd-9d47-27910710dbc8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.166688 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/3e2adf44-9053-4dcd-9d47-27910710dbc8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.165764 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.166909 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.167185 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3e2adf44-9053-4dcd-9d47-27910710dbc8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.167225 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3e2adf44-9053-4dcd-9d47-27910710dbc8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.167227 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwz6w\" (UniqueName: 
\"kubernetes.io/projected/3e2adf44-9053-4dcd-9d47-27910710dbc8-kube-api-access-dwz6w\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.168280 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3e2adf44-9053-4dcd-9d47-27910710dbc8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.171213 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e2adf44-9053-4dcd-9d47-27910710dbc8-config-data\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.178224 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.179271 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.179629 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " 
pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.193112 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwz6w\" (UniqueName: \"kubernetes.io/projected/3e2adf44-9053-4dcd-9d47-27910710dbc8-kube-api-access-dwz6w\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.213692 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.264666 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 24 00:53:00 crc kubenswrapper[4676]: I0124 00:53:00.768190 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 24 00:53:01 crc kubenswrapper[4676]: I0124 00:53:01.296320 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3e2adf44-9053-4dcd-9d47-27910710dbc8","Type":"ContainerStarted","Data":"7398b97beedabcbadf8e82b5f7009c22665a40a681f29f7b52fa09e6bbc8c295"} Jan 24 00:53:09 crc kubenswrapper[4676]: I0124 00:53:09.363768 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 00:53:09 crc kubenswrapper[4676]: I0124 00:53:09.364345 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 00:53:09 crc kubenswrapper[4676]: I0124 00:53:09.364386 4676 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 00:53:09 crc kubenswrapper[4676]: I0124 00:53:09.365087 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398"} pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 00:53:09 crc kubenswrapper[4676]: I0124 00:53:09.365130 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" containerID="cri-o://3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" gracePeriod=600 Jan 24 00:53:09 crc kubenswrapper[4676]: E0124 00:53:09.990882 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:53:10 crc kubenswrapper[4676]: I0124 00:53:10.383876 4676 generic.go:334] "Generic (PLEG): container finished" podID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" exitCode=0 Jan 24 00:53:10 crc kubenswrapper[4676]: I0124 
00:53:10.383923 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerDied","Data":"3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398"} Jan 24 00:53:10 crc kubenswrapper[4676]: I0124 00:53:10.383954 4676 scope.go:117] "RemoveContainer" containerID="50556dab9c423476f3e768a6107a078181f27e372f40fe19e80cac55ab001315" Jan 24 00:53:10 crc kubenswrapper[4676]: I0124 00:53:10.384654 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:53:10 crc kubenswrapper[4676]: E0124 00:53:10.384881 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:53:22 crc kubenswrapper[4676]: I0124 00:53:22.255393 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:53:22 crc kubenswrapper[4676]: E0124 00:53:22.256170 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:53:33 crc kubenswrapper[4676]: E0124 00:53:33.674358 4676 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context 
canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 24 00:53:33 crc kubenswrapper[4676]: E0124 00:53:33.677316 4676 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOn
ly:nil,},VolumeMount{Name:kube-api-access-dwz6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(3e2adf44-9053-4dcd-9d47-27910710dbc8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 00:53:33 crc kubenswrapper[4676]: E0124 00:53:33.678713 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="3e2adf44-9053-4dcd-9d47-27910710dbc8" Jan 24 00:53:33 crc kubenswrapper[4676]: E0124 00:53:33.707158 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="3e2adf44-9053-4dcd-9d47-27910710dbc8" Jan 24 00:53:35 crc kubenswrapper[4676]: I0124 00:53:35.256004 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:53:35 crc kubenswrapper[4676]: E0124 00:53:35.256680 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:53:47 crc kubenswrapper[4676]: I0124 00:53:47.730835 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 24 00:53:48 crc kubenswrapper[4676]: I0124 00:53:48.841904 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3e2adf44-9053-4dcd-9d47-27910710dbc8","Type":"ContainerStarted","Data":"5ce215c83e117e67d9aa3269233be260f6fe19170cac686c5414d50aaa942b7b"} Jan 24 00:53:50 crc kubenswrapper[4676]: I0124 00:53:50.257541 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:53:50 crc kubenswrapper[4676]: E0124 00:53:50.259701 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:54:02 crc kubenswrapper[4676]: I0124 00:54:02.114651 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=17.155567276 podStartE2EDuration="1m4.114635683s" podCreationTimestamp="2026-01-24 00:52:58 +0000 UTC" firstStartedPulling="2026-01-24 00:53:00.769598887 +0000 UTC m=+2964.799569898" lastFinishedPulling="2026-01-24 00:53:47.728667304 +0000 UTC m=+3011.758638305" observedRunningTime="2026-01-24 00:53:48.865678683 +0000 UTC m=+3012.895649684" watchObservedRunningTime="2026-01-24 00:54:02.114635683 +0000 UTC m=+3026.144606684" Jan 24 00:54:02 crc kubenswrapper[4676]: I0124 00:54:02.123322 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vsfsj"] Jan 24 00:54:02 crc kubenswrapper[4676]: I0124 00:54:02.125750 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vsfsj" Jan 24 00:54:02 crc kubenswrapper[4676]: I0124 00:54:02.201553 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vsfsj"] Jan 24 00:54:02 crc kubenswrapper[4676]: I0124 00:54:02.292506 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34862c1d-2d18-42f8-9ef7-71d349c019fd-catalog-content\") pod \"certified-operators-vsfsj\" (UID: \"34862c1d-2d18-42f8-9ef7-71d349c019fd\") " pod="openshift-marketplace/certified-operators-vsfsj" Jan 24 00:54:02 crc kubenswrapper[4676]: I0124 00:54:02.292565 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvh2j\" (UniqueName: \"kubernetes.io/projected/34862c1d-2d18-42f8-9ef7-71d349c019fd-kube-api-access-cvh2j\") pod \"certified-operators-vsfsj\" (UID: \"34862c1d-2d18-42f8-9ef7-71d349c019fd\") " 
pod="openshift-marketplace/certified-operators-vsfsj" Jan 24 00:54:02 crc kubenswrapper[4676]: I0124 00:54:02.292584 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34862c1d-2d18-42f8-9ef7-71d349c019fd-utilities\") pod \"certified-operators-vsfsj\" (UID: \"34862c1d-2d18-42f8-9ef7-71d349c019fd\") " pod="openshift-marketplace/certified-operators-vsfsj" Jan 24 00:54:02 crc kubenswrapper[4676]: I0124 00:54:02.395161 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34862c1d-2d18-42f8-9ef7-71d349c019fd-catalog-content\") pod \"certified-operators-vsfsj\" (UID: \"34862c1d-2d18-42f8-9ef7-71d349c019fd\") " pod="openshift-marketplace/certified-operators-vsfsj" Jan 24 00:54:02 crc kubenswrapper[4676]: I0124 00:54:02.395236 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvh2j\" (UniqueName: \"kubernetes.io/projected/34862c1d-2d18-42f8-9ef7-71d349c019fd-kube-api-access-cvh2j\") pod \"certified-operators-vsfsj\" (UID: \"34862c1d-2d18-42f8-9ef7-71d349c019fd\") " pod="openshift-marketplace/certified-operators-vsfsj" Jan 24 00:54:02 crc kubenswrapper[4676]: I0124 00:54:02.395255 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34862c1d-2d18-42f8-9ef7-71d349c019fd-utilities\") pod \"certified-operators-vsfsj\" (UID: \"34862c1d-2d18-42f8-9ef7-71d349c019fd\") " pod="openshift-marketplace/certified-operators-vsfsj" Jan 24 00:54:02 crc kubenswrapper[4676]: I0124 00:54:02.395792 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34862c1d-2d18-42f8-9ef7-71d349c019fd-catalog-content\") pod \"certified-operators-vsfsj\" (UID: \"34862c1d-2d18-42f8-9ef7-71d349c019fd\") " 
pod="openshift-marketplace/certified-operators-vsfsj" Jan 24 00:54:02 crc kubenswrapper[4676]: I0124 00:54:02.395829 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34862c1d-2d18-42f8-9ef7-71d349c019fd-utilities\") pod \"certified-operators-vsfsj\" (UID: \"34862c1d-2d18-42f8-9ef7-71d349c019fd\") " pod="openshift-marketplace/certified-operators-vsfsj" Jan 24 00:54:02 crc kubenswrapper[4676]: I0124 00:54:02.412300 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvh2j\" (UniqueName: \"kubernetes.io/projected/34862c1d-2d18-42f8-9ef7-71d349c019fd-kube-api-access-cvh2j\") pod \"certified-operators-vsfsj\" (UID: \"34862c1d-2d18-42f8-9ef7-71d349c019fd\") " pod="openshift-marketplace/certified-operators-vsfsj" Jan 24 00:54:02 crc kubenswrapper[4676]: I0124 00:54:02.450884 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vsfsj" Jan 24 00:54:03 crc kubenswrapper[4676]: I0124 00:54:03.089147 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vsfsj"] Jan 24 00:54:04 crc kubenswrapper[4676]: I0124 00:54:03.999395 4676 generic.go:334] "Generic (PLEG): container finished" podID="34862c1d-2d18-42f8-9ef7-71d349c019fd" containerID="bd0522c40e1ed2b9f90bb6ab56e9ed807a381792f0d150f9ebf635515f27885b" exitCode=0 Jan 24 00:54:04 crc kubenswrapper[4676]: I0124 00:54:03.999521 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsfsj" event={"ID":"34862c1d-2d18-42f8-9ef7-71d349c019fd","Type":"ContainerDied","Data":"bd0522c40e1ed2b9f90bb6ab56e9ed807a381792f0d150f9ebf635515f27885b"} Jan 24 00:54:04 crc kubenswrapper[4676]: I0124 00:54:03.999736 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsfsj" 
event={"ID":"34862c1d-2d18-42f8-9ef7-71d349c019fd","Type":"ContainerStarted","Data":"e6f60f5616d0008aaa8f773063b8843a5b35c44a564220184ce620f19abe2326"} Jan 24 00:54:05 crc kubenswrapper[4676]: I0124 00:54:05.256357 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:54:05 crc kubenswrapper[4676]: E0124 00:54:05.256873 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:54:10 crc kubenswrapper[4676]: I0124 00:54:10.051208 4676 generic.go:334] "Generic (PLEG): container finished" podID="34862c1d-2d18-42f8-9ef7-71d349c019fd" containerID="e9f6ee47231da28f152e04b892dfc525d731c8ebb15f49b6feabec250f0eb303" exitCode=0 Jan 24 00:54:10 crc kubenswrapper[4676]: I0124 00:54:10.051339 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsfsj" event={"ID":"34862c1d-2d18-42f8-9ef7-71d349c019fd","Type":"ContainerDied","Data":"e9f6ee47231da28f152e04b892dfc525d731c8ebb15f49b6feabec250f0eb303"} Jan 24 00:54:11 crc kubenswrapper[4676]: I0124 00:54:11.069173 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsfsj" event={"ID":"34862c1d-2d18-42f8-9ef7-71d349c019fd","Type":"ContainerStarted","Data":"6f5bcd53abfa53e8152928e53d01c7a3fab65ae2fc2e34afd71588e99e2c7975"} Jan 24 00:54:11 crc kubenswrapper[4676]: I0124 00:54:11.101580 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vsfsj" podStartSLOduration=2.6596724849999998 podStartE2EDuration="9.101561149s" 
podCreationTimestamp="2026-01-24 00:54:02 +0000 UTC" firstStartedPulling="2026-01-24 00:54:04.002138884 +0000 UTC m=+3028.032109885" lastFinishedPulling="2026-01-24 00:54:10.444027548 +0000 UTC m=+3034.473998549" observedRunningTime="2026-01-24 00:54:11.091024685 +0000 UTC m=+3035.120995686" watchObservedRunningTime="2026-01-24 00:54:11.101561149 +0000 UTC m=+3035.131532150" Jan 24 00:54:12 crc kubenswrapper[4676]: I0124 00:54:12.725788 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vsfsj" Jan 24 00:54:12 crc kubenswrapper[4676]: I0124 00:54:12.728219 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vsfsj" Jan 24 00:54:12 crc kubenswrapper[4676]: I0124 00:54:12.812953 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vsfsj" Jan 24 00:54:19 crc kubenswrapper[4676]: I0124 00:54:19.256292 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:54:19 crc kubenswrapper[4676]: E0124 00:54:19.257113 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:54:22 crc kubenswrapper[4676]: I0124 00:54:22.502171 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vsfsj" Jan 24 00:54:22 crc kubenswrapper[4676]: I0124 00:54:22.587548 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vsfsj"] Jan 24 00:54:22 
crc kubenswrapper[4676]: I0124 00:54:22.650364 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h6lxx"] Jan 24 00:54:22 crc kubenswrapper[4676]: I0124 00:54:22.650662 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h6lxx" podUID="8b3f0cd4-dad6-4882-8cf8-03e0a23768ec" containerName="registry-server" containerID="cri-o://08fb9c82277ed77127a24cba780debc02f8a59da1333b461232dac0199f577d3" gracePeriod=2 Jan 24 00:54:23 crc kubenswrapper[4676]: I0124 00:54:23.188495 4676 generic.go:334] "Generic (PLEG): container finished" podID="8b3f0cd4-dad6-4882-8cf8-03e0a23768ec" containerID="08fb9c82277ed77127a24cba780debc02f8a59da1333b461232dac0199f577d3" exitCode=0 Jan 24 00:54:23 crc kubenswrapper[4676]: I0124 00:54:23.188738 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6lxx" event={"ID":"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec","Type":"ContainerDied","Data":"08fb9c82277ed77127a24cba780debc02f8a59da1333b461232dac0199f577d3"} Jan 24 00:54:23 crc kubenswrapper[4676]: I0124 00:54:23.189083 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6lxx" event={"ID":"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec","Type":"ContainerDied","Data":"fb5783295da5666c028415e55625575a6eddcc0d1dfa67d4345fe77ee0fb6e6b"} Jan 24 00:54:23 crc kubenswrapper[4676]: I0124 00:54:23.189115 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb5783295da5666c028415e55625575a6eddcc0d1dfa67d4345fe77ee0fb6e6b" Jan 24 00:54:23 crc kubenswrapper[4676]: I0124 00:54:23.227325 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h6lxx" Jan 24 00:54:23 crc kubenswrapper[4676]: I0124 00:54:23.254084 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdlrw\" (UniqueName: \"kubernetes.io/projected/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-kube-api-access-wdlrw\") pod \"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec\" (UID: \"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec\") " Jan 24 00:54:23 crc kubenswrapper[4676]: I0124 00:54:23.254156 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-utilities\") pod \"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec\" (UID: \"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec\") " Jan 24 00:54:23 crc kubenswrapper[4676]: I0124 00:54:23.254186 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-catalog-content\") pod \"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec\" (UID: \"8b3f0cd4-dad6-4882-8cf8-03e0a23768ec\") " Jan 24 00:54:23 crc kubenswrapper[4676]: I0124 00:54:23.255069 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-utilities" (OuterVolumeSpecName: "utilities") pod "8b3f0cd4-dad6-4882-8cf8-03e0a23768ec" (UID: "8b3f0cd4-dad6-4882-8cf8-03e0a23768ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:54:23 crc kubenswrapper[4676]: I0124 00:54:23.262719 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-kube-api-access-wdlrw" (OuterVolumeSpecName: "kube-api-access-wdlrw") pod "8b3f0cd4-dad6-4882-8cf8-03e0a23768ec" (UID: "8b3f0cd4-dad6-4882-8cf8-03e0a23768ec"). InnerVolumeSpecName "kube-api-access-wdlrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:54:23 crc kubenswrapper[4676]: I0124 00:54:23.315647 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b3f0cd4-dad6-4882-8cf8-03e0a23768ec" (UID: "8b3f0cd4-dad6-4882-8cf8-03e0a23768ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:54:23 crc kubenswrapper[4676]: I0124 00:54:23.356126 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdlrw\" (UniqueName: \"kubernetes.io/projected/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-kube-api-access-wdlrw\") on node \"crc\" DevicePath \"\"" Jan 24 00:54:23 crc kubenswrapper[4676]: I0124 00:54:23.356162 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:54:23 crc kubenswrapper[4676]: I0124 00:54:23.356171 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:54:24 crc kubenswrapper[4676]: I0124 00:54:24.195525 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h6lxx" Jan 24 00:54:24 crc kubenswrapper[4676]: I0124 00:54:24.223564 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h6lxx"] Jan 24 00:54:24 crc kubenswrapper[4676]: I0124 00:54:24.235411 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h6lxx"] Jan 24 00:54:24 crc kubenswrapper[4676]: I0124 00:54:24.267895 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b3f0cd4-dad6-4882-8cf8-03e0a23768ec" path="/var/lib/kubelet/pods/8b3f0cd4-dad6-4882-8cf8-03e0a23768ec/volumes" Jan 24 00:54:32 crc kubenswrapper[4676]: I0124 00:54:32.256334 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:54:32 crc kubenswrapper[4676]: E0124 00:54:32.256996 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:54:42 crc kubenswrapper[4676]: I0124 00:54:42.970200 4676 scope.go:117] "RemoveContainer" containerID="e1126bba560d6ad086386fdbfde6ce098ca6ec45eb8ef89aa5c2a014e09ba3c2" Jan 24 00:54:43 crc kubenswrapper[4676]: I0124 00:54:43.002556 4676 scope.go:117] "RemoveContainer" containerID="08fb9c82277ed77127a24cba780debc02f8a59da1333b461232dac0199f577d3" Jan 24 00:54:43 crc kubenswrapper[4676]: I0124 00:54:43.035683 4676 scope.go:117] "RemoveContainer" containerID="52bc77f8ab557f140d976553f763b957c850be181430d480625497a667852cf5" Jan 24 00:54:47 crc kubenswrapper[4676]: I0124 00:54:47.256498 4676 scope.go:117] "RemoveContainer" 
containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:54:47 crc kubenswrapper[4676]: E0124 00:54:47.257442 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:55:01 crc kubenswrapper[4676]: I0124 00:55:01.256669 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:55:01 crc kubenswrapper[4676]: E0124 00:55:01.257663 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:55:15 crc kubenswrapper[4676]: I0124 00:55:15.256589 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:55:15 crc kubenswrapper[4676]: E0124 00:55:15.259844 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:55:15 crc kubenswrapper[4676]: I0124 00:55:15.844036 4676 generic.go:334] 
"Generic (PLEG): container finished" podID="3e2adf44-9053-4dcd-9d47-27910710dbc8" containerID="5ce215c83e117e67d9aa3269233be260f6fe19170cac686c5414d50aaa942b7b" exitCode=0 Jan 24 00:55:15 crc kubenswrapper[4676]: I0124 00:55:15.844099 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3e2adf44-9053-4dcd-9d47-27910710dbc8","Type":"ContainerDied","Data":"5ce215c83e117e67d9aa3269233be260f6fe19170cac686c5414d50aaa942b7b"} Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.461598 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.572534 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-openstack-config-secret\") pod \"3e2adf44-9053-4dcd-9d47-27910710dbc8\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.572599 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-ca-certs\") pod \"3e2adf44-9053-4dcd-9d47-27910710dbc8\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.572671 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-ssh-key\") pod \"3e2adf44-9053-4dcd-9d47-27910710dbc8\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.572729 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e2adf44-9053-4dcd-9d47-27910710dbc8-config-data\") pod 
\"3e2adf44-9053-4dcd-9d47-27910710dbc8\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.572798 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3e2adf44-9053-4dcd-9d47-27910710dbc8-test-operator-ephemeral-workdir\") pod \"3e2adf44-9053-4dcd-9d47-27910710dbc8\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.572821 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"3e2adf44-9053-4dcd-9d47-27910710dbc8\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.572892 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3e2adf44-9053-4dcd-9d47-27910710dbc8-openstack-config\") pod \"3e2adf44-9053-4dcd-9d47-27910710dbc8\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.572970 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3e2adf44-9053-4dcd-9d47-27910710dbc8-test-operator-ephemeral-temporary\") pod \"3e2adf44-9053-4dcd-9d47-27910710dbc8\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.573015 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwz6w\" (UniqueName: \"kubernetes.io/projected/3e2adf44-9053-4dcd-9d47-27910710dbc8-kube-api-access-dwz6w\") pod \"3e2adf44-9053-4dcd-9d47-27910710dbc8\" (UID: \"3e2adf44-9053-4dcd-9d47-27910710dbc8\") " Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.575282 4676 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e2adf44-9053-4dcd-9d47-27910710dbc8-config-data" (OuterVolumeSpecName: "config-data") pod "3e2adf44-9053-4dcd-9d47-27910710dbc8" (UID: "3e2adf44-9053-4dcd-9d47-27910710dbc8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.576154 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e2adf44-9053-4dcd-9d47-27910710dbc8-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "3e2adf44-9053-4dcd-9d47-27910710dbc8" (UID: "3e2adf44-9053-4dcd-9d47-27910710dbc8"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.580888 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e2adf44-9053-4dcd-9d47-27910710dbc8-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "3e2adf44-9053-4dcd-9d47-27910710dbc8" (UID: "3e2adf44-9053-4dcd-9d47-27910710dbc8"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.585878 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "3e2adf44-9053-4dcd-9d47-27910710dbc8" (UID: "3e2adf44-9053-4dcd-9d47-27910710dbc8"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.602574 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e2adf44-9053-4dcd-9d47-27910710dbc8-kube-api-access-dwz6w" (OuterVolumeSpecName: "kube-api-access-dwz6w") pod "3e2adf44-9053-4dcd-9d47-27910710dbc8" (UID: "3e2adf44-9053-4dcd-9d47-27910710dbc8"). InnerVolumeSpecName "kube-api-access-dwz6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.608583 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "3e2adf44-9053-4dcd-9d47-27910710dbc8" (UID: "3e2adf44-9053-4dcd-9d47-27910710dbc8"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.621810 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3e2adf44-9053-4dcd-9d47-27910710dbc8" (UID: "3e2adf44-9053-4dcd-9d47-27910710dbc8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.623696 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "3e2adf44-9053-4dcd-9d47-27910710dbc8" (UID: "3e2adf44-9053-4dcd-9d47-27910710dbc8"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.642933 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e2adf44-9053-4dcd-9d47-27910710dbc8-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "3e2adf44-9053-4dcd-9d47-27910710dbc8" (UID: "3e2adf44-9053-4dcd-9d47-27910710dbc8"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.674954 4676 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3e2adf44-9053-4dcd-9d47-27910710dbc8-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.675035 4676 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.675060 4676 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3e2adf44-9053-4dcd-9d47-27910710dbc8-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.675083 4676 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3e2adf44-9053-4dcd-9d47-27910710dbc8-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.675102 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwz6w\" (UniqueName: \"kubernetes.io/projected/3e2adf44-9053-4dcd-9d47-27910710dbc8-kube-api-access-dwz6w\") on node \"crc\" DevicePath \"\"" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.675121 4676 
reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.675139 4676 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.675177 4676 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e2adf44-9053-4dcd-9d47-27910710dbc8-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.675193 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e2adf44-9053-4dcd-9d47-27910710dbc8-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.705941 4676 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.781111 4676 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.866913 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3e2adf44-9053-4dcd-9d47-27910710dbc8","Type":"ContainerDied","Data":"7398b97beedabcbadf8e82b5f7009c22665a40a681f29f7b52fa09e6bbc8c295"} Jan 24 00:55:17 crc kubenswrapper[4676]: I0124 00:55:17.866972 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7398b97beedabcbadf8e82b5f7009c22665a40a681f29f7b52fa09e6bbc8c295" Jan 24 00:55:17 crc 
kubenswrapper[4676]: I0124 00:55:17.867073 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.125161 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 24 00:55:21 crc kubenswrapper[4676]: E0124 00:55:21.126605 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3f0cd4-dad6-4882-8cf8-03e0a23768ec" containerName="extract-utilities" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.126637 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3f0cd4-dad6-4882-8cf8-03e0a23768ec" containerName="extract-utilities" Jan 24 00:55:21 crc kubenswrapper[4676]: E0124 00:55:21.126671 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e2adf44-9053-4dcd-9d47-27910710dbc8" containerName="tempest-tests-tempest-tests-runner" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.126686 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e2adf44-9053-4dcd-9d47-27910710dbc8" containerName="tempest-tests-tempest-tests-runner" Jan 24 00:55:21 crc kubenswrapper[4676]: E0124 00:55:21.126714 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3f0cd4-dad6-4882-8cf8-03e0a23768ec" containerName="registry-server" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.126728 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3f0cd4-dad6-4882-8cf8-03e0a23768ec" containerName="registry-server" Jan 24 00:55:21 crc kubenswrapper[4676]: E0124 00:55:21.126778 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3f0cd4-dad6-4882-8cf8-03e0a23768ec" containerName="extract-content" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.126792 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3f0cd4-dad6-4882-8cf8-03e0a23768ec" containerName="extract-content" Jan 24 00:55:21 crc 
kubenswrapper[4676]: I0124 00:55:21.127176 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e2adf44-9053-4dcd-9d47-27910710dbc8" containerName="tempest-tests-tempest-tests-runner" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.127212 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b3f0cd4-dad6-4882-8cf8-03e0a23768ec" containerName="registry-server" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.128369 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.132437 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-v9f6d" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.140053 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.253105 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dsvg\" (UniqueName: \"kubernetes.io/projected/e1574f42-e89a-40d4-b6da-2d4ef0824916-kube-api-access-2dsvg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e1574f42-e89a-40d4-b6da-2d4ef0824916\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.253341 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e1574f42-e89a-40d4-b6da-2d4ef0824916\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.357447 4676 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-2dsvg\" (UniqueName: \"kubernetes.io/projected/e1574f42-e89a-40d4-b6da-2d4ef0824916-kube-api-access-2dsvg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e1574f42-e89a-40d4-b6da-2d4ef0824916\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.357586 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e1574f42-e89a-40d4-b6da-2d4ef0824916\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.358725 4676 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e1574f42-e89a-40d4-b6da-2d4ef0824916\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.408879 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dsvg\" (UniqueName: \"kubernetes.io/projected/e1574f42-e89a-40d4-b6da-2d4ef0824916-kube-api-access-2dsvg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e1574f42-e89a-40d4-b6da-2d4ef0824916\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.410423 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e1574f42-e89a-40d4-b6da-2d4ef0824916\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.462511 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 00:55:21 crc kubenswrapper[4676]: I0124 00:55:21.937812 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 24 00:55:22 crc kubenswrapper[4676]: I0124 00:55:22.924751 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e1574f42-e89a-40d4-b6da-2d4ef0824916","Type":"ContainerStarted","Data":"0fe4334d32c0067c373d345a2e81177a0799a21f58306691839b8dcf3179e435"} Jan 24 00:55:23 crc kubenswrapper[4676]: I0124 00:55:23.939706 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e1574f42-e89a-40d4-b6da-2d4ef0824916","Type":"ContainerStarted","Data":"57f025dc489d997bdebb6abcb40b61a62627feda286f2eea27f4307b37064031"} Jan 24 00:55:23 crc kubenswrapper[4676]: I0124 00:55:23.963491 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.968105555 podStartE2EDuration="2.963461929s" podCreationTimestamp="2026-01-24 00:55:21 +0000 UTC" firstStartedPulling="2026-01-24 00:55:21.943934167 +0000 UTC m=+3105.973905188" lastFinishedPulling="2026-01-24 00:55:22.939290551 +0000 UTC m=+3106.969261562" observedRunningTime="2026-01-24 00:55:23.961835299 +0000 UTC m=+3107.991806350" watchObservedRunningTime="2026-01-24 00:55:23.963461929 +0000 UTC m=+3107.993432960" Jan 24 00:55:29 crc kubenswrapper[4676]: I0124 00:55:29.256353 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:55:29 crc 
kubenswrapper[4676]: E0124 00:55:29.258119 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:55:44 crc kubenswrapper[4676]: I0124 00:55:44.268718 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:55:44 crc kubenswrapper[4676]: E0124 00:55:44.269957 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:55:45 crc kubenswrapper[4676]: I0124 00:55:45.577546 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-57xmz/must-gather-hv4gr"] Jan 24 00:55:45 crc kubenswrapper[4676]: I0124 00:55:45.579293 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57xmz/must-gather-hv4gr" Jan 24 00:55:45 crc kubenswrapper[4676]: I0124 00:55:45.582328 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-57xmz"/"kube-root-ca.crt" Jan 24 00:55:45 crc kubenswrapper[4676]: I0124 00:55:45.582954 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-57xmz"/"openshift-service-ca.crt" Jan 24 00:55:45 crc kubenswrapper[4676]: I0124 00:55:45.598636 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-57xmz/must-gather-hv4gr"] Jan 24 00:55:45 crc kubenswrapper[4676]: I0124 00:55:45.711546 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a646626a-4429-4d8a-9f75-7b8bfade90ec-must-gather-output\") pod \"must-gather-hv4gr\" (UID: \"a646626a-4429-4d8a-9f75-7b8bfade90ec\") " pod="openshift-must-gather-57xmz/must-gather-hv4gr" Jan 24 00:55:45 crc kubenswrapper[4676]: I0124 00:55:45.711686 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbztc\" (UniqueName: \"kubernetes.io/projected/a646626a-4429-4d8a-9f75-7b8bfade90ec-kube-api-access-jbztc\") pod \"must-gather-hv4gr\" (UID: \"a646626a-4429-4d8a-9f75-7b8bfade90ec\") " pod="openshift-must-gather-57xmz/must-gather-hv4gr" Jan 24 00:55:45 crc kubenswrapper[4676]: I0124 00:55:45.813963 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a646626a-4429-4d8a-9f75-7b8bfade90ec-must-gather-output\") pod \"must-gather-hv4gr\" (UID: \"a646626a-4429-4d8a-9f75-7b8bfade90ec\") " pod="openshift-must-gather-57xmz/must-gather-hv4gr" Jan 24 00:55:45 crc kubenswrapper[4676]: I0124 00:55:45.814024 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jbztc\" (UniqueName: \"kubernetes.io/projected/a646626a-4429-4d8a-9f75-7b8bfade90ec-kube-api-access-jbztc\") pod \"must-gather-hv4gr\" (UID: \"a646626a-4429-4d8a-9f75-7b8bfade90ec\") " pod="openshift-must-gather-57xmz/must-gather-hv4gr" Jan 24 00:55:45 crc kubenswrapper[4676]: I0124 00:55:45.814422 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a646626a-4429-4d8a-9f75-7b8bfade90ec-must-gather-output\") pod \"must-gather-hv4gr\" (UID: \"a646626a-4429-4d8a-9f75-7b8bfade90ec\") " pod="openshift-must-gather-57xmz/must-gather-hv4gr" Jan 24 00:55:45 crc kubenswrapper[4676]: I0124 00:55:45.845057 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbztc\" (UniqueName: \"kubernetes.io/projected/a646626a-4429-4d8a-9f75-7b8bfade90ec-kube-api-access-jbztc\") pod \"must-gather-hv4gr\" (UID: \"a646626a-4429-4d8a-9f75-7b8bfade90ec\") " pod="openshift-must-gather-57xmz/must-gather-hv4gr" Jan 24 00:55:45 crc kubenswrapper[4676]: I0124 00:55:45.896945 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57xmz/must-gather-hv4gr" Jan 24 00:55:46 crc kubenswrapper[4676]: I0124 00:55:46.335125 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-57xmz/must-gather-hv4gr"] Jan 24 00:55:47 crc kubenswrapper[4676]: I0124 00:55:47.174855 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57xmz/must-gather-hv4gr" event={"ID":"a646626a-4429-4d8a-9f75-7b8bfade90ec","Type":"ContainerStarted","Data":"43c717b44f07040232b60ea2476de8679d1451b8e216e87add1868ef3142ef40"} Jan 24 00:55:54 crc kubenswrapper[4676]: I0124 00:55:54.246611 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57xmz/must-gather-hv4gr" event={"ID":"a646626a-4429-4d8a-9f75-7b8bfade90ec","Type":"ContainerStarted","Data":"5282e9521531b7fbcb12f5cd407d932cd2321b1dd6ad9e7d8b88355c7f251d0b"} Jan 24 00:55:54 crc kubenswrapper[4676]: I0124 00:55:54.247264 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57xmz/must-gather-hv4gr" event={"ID":"a646626a-4429-4d8a-9f75-7b8bfade90ec","Type":"ContainerStarted","Data":"e762fbd56fb888c091ce2f8ff2b6ad8a52059af7b071066cce8729c1ec864419"} Jan 24 00:55:54 crc kubenswrapper[4676]: I0124 00:55:54.282986 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-57xmz/must-gather-hv4gr" podStartSLOduration=2.456964465 podStartE2EDuration="9.282965963s" podCreationTimestamp="2026-01-24 00:55:45 +0000 UTC" firstStartedPulling="2026-01-24 00:55:46.343482517 +0000 UTC m=+3130.373453518" lastFinishedPulling="2026-01-24 00:55:53.169484015 +0000 UTC m=+3137.199455016" observedRunningTime="2026-01-24 00:55:54.272708298 +0000 UTC m=+3138.302679299" watchObservedRunningTime="2026-01-24 00:55:54.282965963 +0000 UTC m=+3138.312936974" Jan 24 00:55:57 crc kubenswrapper[4676]: I0124 00:55:57.841029 4676 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-57xmz/crc-debug-549dl"] Jan 24 00:55:57 crc kubenswrapper[4676]: I0124 00:55:57.842504 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57xmz/crc-debug-549dl" Jan 24 00:55:57 crc kubenswrapper[4676]: I0124 00:55:57.845196 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-57xmz"/"default-dockercfg-qwqnc" Jan 24 00:55:57 crc kubenswrapper[4676]: I0124 00:55:57.956842 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsvs9\" (UniqueName: \"kubernetes.io/projected/fffc7981-c6a6-4477-8d51-b33a0fc21549-kube-api-access-hsvs9\") pod \"crc-debug-549dl\" (UID: \"fffc7981-c6a6-4477-8d51-b33a0fc21549\") " pod="openshift-must-gather-57xmz/crc-debug-549dl" Jan 24 00:55:57 crc kubenswrapper[4676]: I0124 00:55:57.956944 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fffc7981-c6a6-4477-8d51-b33a0fc21549-host\") pod \"crc-debug-549dl\" (UID: \"fffc7981-c6a6-4477-8d51-b33a0fc21549\") " pod="openshift-must-gather-57xmz/crc-debug-549dl" Jan 24 00:55:58 crc kubenswrapper[4676]: I0124 00:55:58.061358 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsvs9\" (UniqueName: \"kubernetes.io/projected/fffc7981-c6a6-4477-8d51-b33a0fc21549-kube-api-access-hsvs9\") pod \"crc-debug-549dl\" (UID: \"fffc7981-c6a6-4477-8d51-b33a0fc21549\") " pod="openshift-must-gather-57xmz/crc-debug-549dl" Jan 24 00:55:58 crc kubenswrapper[4676]: I0124 00:55:58.061458 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fffc7981-c6a6-4477-8d51-b33a0fc21549-host\") pod \"crc-debug-549dl\" (UID: \"fffc7981-c6a6-4477-8d51-b33a0fc21549\") " pod="openshift-must-gather-57xmz/crc-debug-549dl" Jan 24 00:55:58 crc 
kubenswrapper[4676]: I0124 00:55:58.061570 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fffc7981-c6a6-4477-8d51-b33a0fc21549-host\") pod \"crc-debug-549dl\" (UID: \"fffc7981-c6a6-4477-8d51-b33a0fc21549\") " pod="openshift-must-gather-57xmz/crc-debug-549dl" Jan 24 00:55:58 crc kubenswrapper[4676]: I0124 00:55:58.084867 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsvs9\" (UniqueName: \"kubernetes.io/projected/fffc7981-c6a6-4477-8d51-b33a0fc21549-kube-api-access-hsvs9\") pod \"crc-debug-549dl\" (UID: \"fffc7981-c6a6-4477-8d51-b33a0fc21549\") " pod="openshift-must-gather-57xmz/crc-debug-549dl" Jan 24 00:55:58 crc kubenswrapper[4676]: I0124 00:55:58.162012 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57xmz/crc-debug-549dl" Jan 24 00:55:58 crc kubenswrapper[4676]: W0124 00:55:58.204125 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfffc7981_c6a6_4477_8d51_b33a0fc21549.slice/crio-7f3d83976e681330726b254205952d75eef12938a4edbadd8334f9e1893a505d WatchSource:0}: Error finding container 7f3d83976e681330726b254205952d75eef12938a4edbadd8334f9e1893a505d: Status 404 returned error can't find the container with id 7f3d83976e681330726b254205952d75eef12938a4edbadd8334f9e1893a505d Jan 24 00:55:58 crc kubenswrapper[4676]: I0124 00:55:58.255927 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:55:58 crc kubenswrapper[4676]: E0124 00:55:58.256155 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:55:58 crc kubenswrapper[4676]: I0124 00:55:58.284292 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57xmz/crc-debug-549dl" event={"ID":"fffc7981-c6a6-4477-8d51-b33a0fc21549","Type":"ContainerStarted","Data":"7f3d83976e681330726b254205952d75eef12938a4edbadd8334f9e1893a505d"} Jan 24 00:56:10 crc kubenswrapper[4676]: I0124 00:56:10.255826 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:56:10 crc kubenswrapper[4676]: E0124 00:56:10.256629 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:56:11 crc kubenswrapper[4676]: I0124 00:56:11.397700 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57xmz/crc-debug-549dl" event={"ID":"fffc7981-c6a6-4477-8d51-b33a0fc21549","Type":"ContainerStarted","Data":"fb528b96bb0ea9229c8a0baf4db69e4aad9f43be8642c6a3de48a10855ae6586"} Jan 24 00:56:11 crc kubenswrapper[4676]: I0124 00:56:11.418545 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-57xmz/crc-debug-549dl" podStartSLOduration=2.319340692 podStartE2EDuration="14.41852672s" podCreationTimestamp="2026-01-24 00:55:57 +0000 UTC" firstStartedPulling="2026-01-24 00:55:58.205579855 +0000 UTC m=+3142.235550856" lastFinishedPulling="2026-01-24 00:56:10.304765883 +0000 UTC m=+3154.334736884" 
observedRunningTime="2026-01-24 00:56:11.414854367 +0000 UTC m=+3155.444825378" watchObservedRunningTime="2026-01-24 00:56:11.41852672 +0000 UTC m=+3155.448497731" Jan 24 00:56:17 crc kubenswrapper[4676]: I0124 00:56:17.212958 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hkwvj"] Jan 24 00:56:17 crc kubenswrapper[4676]: I0124 00:56:17.216104 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:17 crc kubenswrapper[4676]: I0124 00:56:17.226358 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hkwvj"] Jan 24 00:56:17 crc kubenswrapper[4676]: I0124 00:56:17.387402 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b3d882-428b-47c9-97e6-9c26f4334f3e-utilities\") pod \"community-operators-hkwvj\" (UID: \"57b3d882-428b-47c9-97e6-9c26f4334f3e\") " pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:17 crc kubenswrapper[4676]: I0124 00:56:17.387514 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvv8z\" (UniqueName: \"kubernetes.io/projected/57b3d882-428b-47c9-97e6-9c26f4334f3e-kube-api-access-lvv8z\") pod \"community-operators-hkwvj\" (UID: \"57b3d882-428b-47c9-97e6-9c26f4334f3e\") " pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:17 crc kubenswrapper[4676]: I0124 00:56:17.388266 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b3d882-428b-47c9-97e6-9c26f4334f3e-catalog-content\") pod \"community-operators-hkwvj\" (UID: \"57b3d882-428b-47c9-97e6-9c26f4334f3e\") " pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:17 crc kubenswrapper[4676]: I0124 
00:56:17.490787 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b3d882-428b-47c9-97e6-9c26f4334f3e-utilities\") pod \"community-operators-hkwvj\" (UID: \"57b3d882-428b-47c9-97e6-9c26f4334f3e\") " pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:17 crc kubenswrapper[4676]: I0124 00:56:17.490844 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvv8z\" (UniqueName: \"kubernetes.io/projected/57b3d882-428b-47c9-97e6-9c26f4334f3e-kube-api-access-lvv8z\") pod \"community-operators-hkwvj\" (UID: \"57b3d882-428b-47c9-97e6-9c26f4334f3e\") " pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:17 crc kubenswrapper[4676]: I0124 00:56:17.491640 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b3d882-428b-47c9-97e6-9c26f4334f3e-utilities\") pod \"community-operators-hkwvj\" (UID: \"57b3d882-428b-47c9-97e6-9c26f4334f3e\") " pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:17 crc kubenswrapper[4676]: I0124 00:56:17.491980 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b3d882-428b-47c9-97e6-9c26f4334f3e-catalog-content\") pod \"community-operators-hkwvj\" (UID: \"57b3d882-428b-47c9-97e6-9c26f4334f3e\") " pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:17 crc kubenswrapper[4676]: I0124 00:56:17.492418 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b3d882-428b-47c9-97e6-9c26f4334f3e-catalog-content\") pod \"community-operators-hkwvj\" (UID: \"57b3d882-428b-47c9-97e6-9c26f4334f3e\") " pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:17 crc kubenswrapper[4676]: I0124 00:56:17.522971 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvv8z\" (UniqueName: \"kubernetes.io/projected/57b3d882-428b-47c9-97e6-9c26f4334f3e-kube-api-access-lvv8z\") pod \"community-operators-hkwvj\" (UID: \"57b3d882-428b-47c9-97e6-9c26f4334f3e\") " pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:17 crc kubenswrapper[4676]: I0124 00:56:17.564619 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:19 crc kubenswrapper[4676]: I0124 00:56:19.021464 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hkwvj"] Jan 24 00:56:19 crc kubenswrapper[4676]: I0124 00:56:19.466133 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwvj" event={"ID":"57b3d882-428b-47c9-97e6-9c26f4334f3e","Type":"ContainerStarted","Data":"a0eed663c019c0254d4baf55709bf2bd4422bac18c67da9b1501732b33846d24"} Jan 24 00:56:20 crc kubenswrapper[4676]: I0124 00:56:20.475212 4676 generic.go:334] "Generic (PLEG): container finished" podID="57b3d882-428b-47c9-97e6-9c26f4334f3e" containerID="425d5cde97e1698c9bba080ab311dd33c110e8b51f818c03479ea99d3bf0e87a" exitCode=0 Jan 24 00:56:20 crc kubenswrapper[4676]: I0124 00:56:20.475389 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwvj" event={"ID":"57b3d882-428b-47c9-97e6-9c26f4334f3e","Type":"ContainerDied","Data":"425d5cde97e1698c9bba080ab311dd33c110e8b51f818c03479ea99d3bf0e87a"} Jan 24 00:56:21 crc kubenswrapper[4676]: I0124 00:56:21.486216 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwvj" event={"ID":"57b3d882-428b-47c9-97e6-9c26f4334f3e","Type":"ContainerStarted","Data":"5381f8e56de8f59e8cd953bea2d31650286f7d0073378850f8e97c4bf617cb58"} Jan 24 00:56:22 crc kubenswrapper[4676]: I0124 00:56:22.500156 4676 
generic.go:334] "Generic (PLEG): container finished" podID="57b3d882-428b-47c9-97e6-9c26f4334f3e" containerID="5381f8e56de8f59e8cd953bea2d31650286f7d0073378850f8e97c4bf617cb58" exitCode=0 Jan 24 00:56:22 crc kubenswrapper[4676]: I0124 00:56:22.500235 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwvj" event={"ID":"57b3d882-428b-47c9-97e6-9c26f4334f3e","Type":"ContainerDied","Data":"5381f8e56de8f59e8cd953bea2d31650286f7d0073378850f8e97c4bf617cb58"} Jan 24 00:56:25 crc kubenswrapper[4676]: I0124 00:56:25.257030 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:56:25 crc kubenswrapper[4676]: E0124 00:56:25.257672 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:56:25 crc kubenswrapper[4676]: I0124 00:56:25.524681 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwvj" event={"ID":"57b3d882-428b-47c9-97e6-9c26f4334f3e","Type":"ContainerStarted","Data":"00383a41a004aac9747fb6b4d29966344c64ce12b2249d555c82a587d77fb127"} Jan 24 00:56:25 crc kubenswrapper[4676]: I0124 00:56:25.544631 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hkwvj" podStartSLOduration=5.979579741 podStartE2EDuration="8.544618059s" podCreationTimestamp="2026-01-24 00:56:17 +0000 UTC" firstStartedPulling="2026-01-24 00:56:20.477515417 +0000 UTC m=+3164.507486418" lastFinishedPulling="2026-01-24 00:56:23.042553735 +0000 UTC m=+3167.072524736" 
observedRunningTime="2026-01-24 00:56:25.541188563 +0000 UTC m=+3169.571159564" watchObservedRunningTime="2026-01-24 00:56:25.544618059 +0000 UTC m=+3169.574589060" Jan 24 00:56:27 crc kubenswrapper[4676]: I0124 00:56:27.547467 4676 generic.go:334] "Generic (PLEG): container finished" podID="fffc7981-c6a6-4477-8d51-b33a0fc21549" containerID="fb528b96bb0ea9229c8a0baf4db69e4aad9f43be8642c6a3de48a10855ae6586" exitCode=0 Jan 24 00:56:27 crc kubenswrapper[4676]: I0124 00:56:27.547654 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57xmz/crc-debug-549dl" event={"ID":"fffc7981-c6a6-4477-8d51-b33a0fc21549","Type":"ContainerDied","Data":"fb528b96bb0ea9229c8a0baf4db69e4aad9f43be8642c6a3de48a10855ae6586"} Jan 24 00:56:27 crc kubenswrapper[4676]: I0124 00:56:27.565396 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:27 crc kubenswrapper[4676]: I0124 00:56:27.566480 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:27 crc kubenswrapper[4676]: I0124 00:56:27.614581 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:28 crc kubenswrapper[4676]: I0124 00:56:28.643097 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57xmz/crc-debug-549dl" Jan 24 00:56:28 crc kubenswrapper[4676]: I0124 00:56:28.681676 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-57xmz/crc-debug-549dl"] Jan 24 00:56:28 crc kubenswrapper[4676]: I0124 00:56:28.694723 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-57xmz/crc-debug-549dl"] Jan 24 00:56:28 crc kubenswrapper[4676]: I0124 00:56:28.841495 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fffc7981-c6a6-4477-8d51-b33a0fc21549-host\") pod \"fffc7981-c6a6-4477-8d51-b33a0fc21549\" (UID: \"fffc7981-c6a6-4477-8d51-b33a0fc21549\") " Jan 24 00:56:28 crc kubenswrapper[4676]: I0124 00:56:28.841668 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsvs9\" (UniqueName: \"kubernetes.io/projected/fffc7981-c6a6-4477-8d51-b33a0fc21549-kube-api-access-hsvs9\") pod \"fffc7981-c6a6-4477-8d51-b33a0fc21549\" (UID: \"fffc7981-c6a6-4477-8d51-b33a0fc21549\") " Jan 24 00:56:28 crc kubenswrapper[4676]: I0124 00:56:28.841749 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fffc7981-c6a6-4477-8d51-b33a0fc21549-host" (OuterVolumeSpecName: "host") pod "fffc7981-c6a6-4477-8d51-b33a0fc21549" (UID: "fffc7981-c6a6-4477-8d51-b33a0fc21549"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:56:28 crc kubenswrapper[4676]: I0124 00:56:28.842159 4676 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fffc7981-c6a6-4477-8d51-b33a0fc21549-host\") on node \"crc\" DevicePath \"\"" Jan 24 00:56:28 crc kubenswrapper[4676]: I0124 00:56:28.848617 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fffc7981-c6a6-4477-8d51-b33a0fc21549-kube-api-access-hsvs9" (OuterVolumeSpecName: "kube-api-access-hsvs9") pod "fffc7981-c6a6-4477-8d51-b33a0fc21549" (UID: "fffc7981-c6a6-4477-8d51-b33a0fc21549"). InnerVolumeSpecName "kube-api-access-hsvs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:56:28 crc kubenswrapper[4676]: I0124 00:56:28.943823 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsvs9\" (UniqueName: \"kubernetes.io/projected/fffc7981-c6a6-4477-8d51-b33a0fc21549-kube-api-access-hsvs9\") on node \"crc\" DevicePath \"\"" Jan 24 00:56:29 crc kubenswrapper[4676]: I0124 00:56:29.565316 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57xmz/crc-debug-549dl" Jan 24 00:56:29 crc kubenswrapper[4676]: I0124 00:56:29.566581 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f3d83976e681330726b254205952d75eef12938a4edbadd8334f9e1893a505d" Jan 24 00:56:29 crc kubenswrapper[4676]: I0124 00:56:29.640168 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:29 crc kubenswrapper[4676]: I0124 00:56:29.691960 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hkwvj"] Jan 24 00:56:29 crc kubenswrapper[4676]: I0124 00:56:29.878231 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-57xmz/crc-debug-xdbwf"] Jan 24 00:56:29 crc kubenswrapper[4676]: E0124 00:56:29.878641 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fffc7981-c6a6-4477-8d51-b33a0fc21549" containerName="container-00" Jan 24 00:56:29 crc kubenswrapper[4676]: I0124 00:56:29.878658 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="fffc7981-c6a6-4477-8d51-b33a0fc21549" containerName="container-00" Jan 24 00:56:29 crc kubenswrapper[4676]: I0124 00:56:29.878808 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="fffc7981-c6a6-4477-8d51-b33a0fc21549" containerName="container-00" Jan 24 00:56:29 crc kubenswrapper[4676]: I0124 00:56:29.879366 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57xmz/crc-debug-xdbwf" Jan 24 00:56:29 crc kubenswrapper[4676]: I0124 00:56:29.882435 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-57xmz"/"default-dockercfg-qwqnc" Jan 24 00:56:30 crc kubenswrapper[4676]: I0124 00:56:30.065670 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e133ba60-b33b-4289-a6a7-ef3827079dbb-host\") pod \"crc-debug-xdbwf\" (UID: \"e133ba60-b33b-4289-a6a7-ef3827079dbb\") " pod="openshift-must-gather-57xmz/crc-debug-xdbwf" Jan 24 00:56:30 crc kubenswrapper[4676]: I0124 00:56:30.065788 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcfd9\" (UniqueName: \"kubernetes.io/projected/e133ba60-b33b-4289-a6a7-ef3827079dbb-kube-api-access-xcfd9\") pod \"crc-debug-xdbwf\" (UID: \"e133ba60-b33b-4289-a6a7-ef3827079dbb\") " pod="openshift-must-gather-57xmz/crc-debug-xdbwf" Jan 24 00:56:30 crc kubenswrapper[4676]: I0124 00:56:30.167295 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcfd9\" (UniqueName: \"kubernetes.io/projected/e133ba60-b33b-4289-a6a7-ef3827079dbb-kube-api-access-xcfd9\") pod \"crc-debug-xdbwf\" (UID: \"e133ba60-b33b-4289-a6a7-ef3827079dbb\") " pod="openshift-must-gather-57xmz/crc-debug-xdbwf" Jan 24 00:56:30 crc kubenswrapper[4676]: I0124 00:56:30.167432 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e133ba60-b33b-4289-a6a7-ef3827079dbb-host\") pod \"crc-debug-xdbwf\" (UID: \"e133ba60-b33b-4289-a6a7-ef3827079dbb\") " pod="openshift-must-gather-57xmz/crc-debug-xdbwf" Jan 24 00:56:30 crc kubenswrapper[4676]: I0124 00:56:30.167540 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/e133ba60-b33b-4289-a6a7-ef3827079dbb-host\") pod \"crc-debug-xdbwf\" (UID: \"e133ba60-b33b-4289-a6a7-ef3827079dbb\") " pod="openshift-must-gather-57xmz/crc-debug-xdbwf" Jan 24 00:56:30 crc kubenswrapper[4676]: I0124 00:56:30.190546 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcfd9\" (UniqueName: \"kubernetes.io/projected/e133ba60-b33b-4289-a6a7-ef3827079dbb-kube-api-access-xcfd9\") pod \"crc-debug-xdbwf\" (UID: \"e133ba60-b33b-4289-a6a7-ef3827079dbb\") " pod="openshift-must-gather-57xmz/crc-debug-xdbwf" Jan 24 00:56:30 crc kubenswrapper[4676]: I0124 00:56:30.202090 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57xmz/crc-debug-xdbwf" Jan 24 00:56:30 crc kubenswrapper[4676]: W0124 00:56:30.255839 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode133ba60_b33b_4289_a6a7_ef3827079dbb.slice/crio-a2bff01c8277b0be8b9a906d9b3e4b0421a15dcefbe34a184e886e13d3684e31 WatchSource:0}: Error finding container a2bff01c8277b0be8b9a906d9b3e4b0421a15dcefbe34a184e886e13d3684e31: Status 404 returned error can't find the container with id a2bff01c8277b0be8b9a906d9b3e4b0421a15dcefbe34a184e886e13d3684e31 Jan 24 00:56:30 crc kubenswrapper[4676]: I0124 00:56:30.273210 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fffc7981-c6a6-4477-8d51-b33a0fc21549" path="/var/lib/kubelet/pods/fffc7981-c6a6-4477-8d51-b33a0fc21549/volumes" Jan 24 00:56:30 crc kubenswrapper[4676]: I0124 00:56:30.572155 4676 generic.go:334] "Generic (PLEG): container finished" podID="e133ba60-b33b-4289-a6a7-ef3827079dbb" containerID="c2d2cba815cc8f80eb4e37a292a7de3ac1d314214ffa71dbce141268908119a0" exitCode=1 Jan 24 00:56:30 crc kubenswrapper[4676]: I0124 00:56:30.572258 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57xmz/crc-debug-xdbwf" 
event={"ID":"e133ba60-b33b-4289-a6a7-ef3827079dbb","Type":"ContainerDied","Data":"c2d2cba815cc8f80eb4e37a292a7de3ac1d314214ffa71dbce141268908119a0"} Jan 24 00:56:30 crc kubenswrapper[4676]: I0124 00:56:30.572471 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57xmz/crc-debug-xdbwf" event={"ID":"e133ba60-b33b-4289-a6a7-ef3827079dbb","Type":"ContainerStarted","Data":"a2bff01c8277b0be8b9a906d9b3e4b0421a15dcefbe34a184e886e13d3684e31"} Jan 24 00:56:30 crc kubenswrapper[4676]: I0124 00:56:30.609477 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-57xmz/crc-debug-xdbwf"] Jan 24 00:56:30 crc kubenswrapper[4676]: I0124 00:56:30.619603 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-57xmz/crc-debug-xdbwf"] Jan 24 00:56:31 crc kubenswrapper[4676]: I0124 00:56:31.579394 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hkwvj" podUID="57b3d882-428b-47c9-97e6-9c26f4334f3e" containerName="registry-server" containerID="cri-o://00383a41a004aac9747fb6b4d29966344c64ce12b2249d555c82a587d77fb127" gracePeriod=2 Jan 24 00:56:31 crc kubenswrapper[4676]: I0124 00:56:31.752157 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57xmz/crc-debug-xdbwf" Jan 24 00:56:31 crc kubenswrapper[4676]: I0124 00:56:31.910439 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e133ba60-b33b-4289-a6a7-ef3827079dbb-host\") pod \"e133ba60-b33b-4289-a6a7-ef3827079dbb\" (UID: \"e133ba60-b33b-4289-a6a7-ef3827079dbb\") " Jan 24 00:56:31 crc kubenswrapper[4676]: I0124 00:56:31.910558 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcfd9\" (UniqueName: \"kubernetes.io/projected/e133ba60-b33b-4289-a6a7-ef3827079dbb-kube-api-access-xcfd9\") pod \"e133ba60-b33b-4289-a6a7-ef3827079dbb\" (UID: \"e133ba60-b33b-4289-a6a7-ef3827079dbb\") " Jan 24 00:56:31 crc kubenswrapper[4676]: I0124 00:56:31.910578 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e133ba60-b33b-4289-a6a7-ef3827079dbb-host" (OuterVolumeSpecName: "host") pod "e133ba60-b33b-4289-a6a7-ef3827079dbb" (UID: "e133ba60-b33b-4289-a6a7-ef3827079dbb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 00:56:31 crc kubenswrapper[4676]: I0124 00:56:31.911110 4676 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e133ba60-b33b-4289-a6a7-ef3827079dbb-host\") on node \"crc\" DevicePath \"\"" Jan 24 00:56:31 crc kubenswrapper[4676]: I0124 00:56:31.923523 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e133ba60-b33b-4289-a6a7-ef3827079dbb-kube-api-access-xcfd9" (OuterVolumeSpecName: "kube-api-access-xcfd9") pod "e133ba60-b33b-4289-a6a7-ef3827079dbb" (UID: "e133ba60-b33b-4289-a6a7-ef3827079dbb"). InnerVolumeSpecName "kube-api-access-xcfd9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.007264 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.012796 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvv8z\" (UniqueName: \"kubernetes.io/projected/57b3d882-428b-47c9-97e6-9c26f4334f3e-kube-api-access-lvv8z\") pod \"57b3d882-428b-47c9-97e6-9c26f4334f3e\" (UID: \"57b3d882-428b-47c9-97e6-9c26f4334f3e\") " Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.012970 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b3d882-428b-47c9-97e6-9c26f4334f3e-utilities\") pod \"57b3d882-428b-47c9-97e6-9c26f4334f3e\" (UID: \"57b3d882-428b-47c9-97e6-9c26f4334f3e\") " Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.012995 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b3d882-428b-47c9-97e6-9c26f4334f3e-catalog-content\") pod \"57b3d882-428b-47c9-97e6-9c26f4334f3e\" (UID: \"57b3d882-428b-47c9-97e6-9c26f4334f3e\") " Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.013471 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcfd9\" (UniqueName: \"kubernetes.io/projected/e133ba60-b33b-4289-a6a7-ef3827079dbb-kube-api-access-xcfd9\") on node \"crc\" DevicePath \"\"" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.013950 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57b3d882-428b-47c9-97e6-9c26f4334f3e-utilities" (OuterVolumeSpecName: "utilities") pod "57b3d882-428b-47c9-97e6-9c26f4334f3e" (UID: "57b3d882-428b-47c9-97e6-9c26f4334f3e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.016116 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57b3d882-428b-47c9-97e6-9c26f4334f3e-kube-api-access-lvv8z" (OuterVolumeSpecName: "kube-api-access-lvv8z") pod "57b3d882-428b-47c9-97e6-9c26f4334f3e" (UID: "57b3d882-428b-47c9-97e6-9c26f4334f3e"). InnerVolumeSpecName "kube-api-access-lvv8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.091830 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57b3d882-428b-47c9-97e6-9c26f4334f3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57b3d882-428b-47c9-97e6-9c26f4334f3e" (UID: "57b3d882-428b-47c9-97e6-9c26f4334f3e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.115789 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b3d882-428b-47c9-97e6-9c26f4334f3e-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.115821 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b3d882-428b-47c9-97e6-9c26f4334f3e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.115833 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvv8z\" (UniqueName: \"kubernetes.io/projected/57b3d882-428b-47c9-97e6-9c26f4334f3e-kube-api-access-lvv8z\") on node \"crc\" DevicePath \"\"" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.268965 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e133ba60-b33b-4289-a6a7-ef3827079dbb" 
path="/var/lib/kubelet/pods/e133ba60-b33b-4289-a6a7-ef3827079dbb/volumes" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.598394 4676 scope.go:117] "RemoveContainer" containerID="c2d2cba815cc8f80eb4e37a292a7de3ac1d314214ffa71dbce141268908119a0" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.598413 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57xmz/crc-debug-xdbwf" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.606715 4676 generic.go:334] "Generic (PLEG): container finished" podID="57b3d882-428b-47c9-97e6-9c26f4334f3e" containerID="00383a41a004aac9747fb6b4d29966344c64ce12b2249d555c82a587d77fb127" exitCode=0 Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.606825 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwvj" event={"ID":"57b3d882-428b-47c9-97e6-9c26f4334f3e","Type":"ContainerDied","Data":"00383a41a004aac9747fb6b4d29966344c64ce12b2249d555c82a587d77fb127"} Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.606887 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwvj" event={"ID":"57b3d882-428b-47c9-97e6-9c26f4334f3e","Type":"ContainerDied","Data":"a0eed663c019c0254d4baf55709bf2bd4422bac18c67da9b1501732b33846d24"} Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.607102 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hkwvj" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.656672 4676 scope.go:117] "RemoveContainer" containerID="00383a41a004aac9747fb6b4d29966344c64ce12b2249d555c82a587d77fb127" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.704293 4676 scope.go:117] "RemoveContainer" containerID="5381f8e56de8f59e8cd953bea2d31650286f7d0073378850f8e97c4bf617cb58" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.708061 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hkwvj"] Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.718468 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hkwvj"] Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.747857 4676 scope.go:117] "RemoveContainer" containerID="425d5cde97e1698c9bba080ab311dd33c110e8b51f818c03479ea99d3bf0e87a" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.770990 4676 scope.go:117] "RemoveContainer" containerID="00383a41a004aac9747fb6b4d29966344c64ce12b2249d555c82a587d77fb127" Jan 24 00:56:32 crc kubenswrapper[4676]: E0124 00:56:32.774498 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00383a41a004aac9747fb6b4d29966344c64ce12b2249d555c82a587d77fb127\": container with ID starting with 00383a41a004aac9747fb6b4d29966344c64ce12b2249d555c82a587d77fb127 not found: ID does not exist" containerID="00383a41a004aac9747fb6b4d29966344c64ce12b2249d555c82a587d77fb127" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.774552 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00383a41a004aac9747fb6b4d29966344c64ce12b2249d555c82a587d77fb127"} err="failed to get container status \"00383a41a004aac9747fb6b4d29966344c64ce12b2249d555c82a587d77fb127\": rpc error: code = NotFound desc = could not find 
container \"00383a41a004aac9747fb6b4d29966344c64ce12b2249d555c82a587d77fb127\": container with ID starting with 00383a41a004aac9747fb6b4d29966344c64ce12b2249d555c82a587d77fb127 not found: ID does not exist" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.774586 4676 scope.go:117] "RemoveContainer" containerID="5381f8e56de8f59e8cd953bea2d31650286f7d0073378850f8e97c4bf617cb58" Jan 24 00:56:32 crc kubenswrapper[4676]: E0124 00:56:32.774918 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5381f8e56de8f59e8cd953bea2d31650286f7d0073378850f8e97c4bf617cb58\": container with ID starting with 5381f8e56de8f59e8cd953bea2d31650286f7d0073378850f8e97c4bf617cb58 not found: ID does not exist" containerID="5381f8e56de8f59e8cd953bea2d31650286f7d0073378850f8e97c4bf617cb58" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.774952 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5381f8e56de8f59e8cd953bea2d31650286f7d0073378850f8e97c4bf617cb58"} err="failed to get container status \"5381f8e56de8f59e8cd953bea2d31650286f7d0073378850f8e97c4bf617cb58\": rpc error: code = NotFound desc = could not find container \"5381f8e56de8f59e8cd953bea2d31650286f7d0073378850f8e97c4bf617cb58\": container with ID starting with 5381f8e56de8f59e8cd953bea2d31650286f7d0073378850f8e97c4bf617cb58 not found: ID does not exist" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.774976 4676 scope.go:117] "RemoveContainer" containerID="425d5cde97e1698c9bba080ab311dd33c110e8b51f818c03479ea99d3bf0e87a" Jan 24 00:56:32 crc kubenswrapper[4676]: E0124 00:56:32.775254 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"425d5cde97e1698c9bba080ab311dd33c110e8b51f818c03479ea99d3bf0e87a\": container with ID starting with 425d5cde97e1698c9bba080ab311dd33c110e8b51f818c03479ea99d3bf0e87a not found: ID does 
not exist" containerID="425d5cde97e1698c9bba080ab311dd33c110e8b51f818c03479ea99d3bf0e87a" Jan 24 00:56:32 crc kubenswrapper[4676]: I0124 00:56:32.775280 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425d5cde97e1698c9bba080ab311dd33c110e8b51f818c03479ea99d3bf0e87a"} err="failed to get container status \"425d5cde97e1698c9bba080ab311dd33c110e8b51f818c03479ea99d3bf0e87a\": rpc error: code = NotFound desc = could not find container \"425d5cde97e1698c9bba080ab311dd33c110e8b51f818c03479ea99d3bf0e87a\": container with ID starting with 425d5cde97e1698c9bba080ab311dd33c110e8b51f818c03479ea99d3bf0e87a not found: ID does not exist" Jan 24 00:56:34 crc kubenswrapper[4676]: I0124 00:56:34.267689 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57b3d882-428b-47c9-97e6-9c26f4334f3e" path="/var/lib/kubelet/pods/57b3d882-428b-47c9-97e6-9c26f4334f3e/volumes" Jan 24 00:56:40 crc kubenswrapper[4676]: I0124 00:56:40.256175 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:56:40 crc kubenswrapper[4676]: E0124 00:56:40.257090 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:56:55 crc kubenswrapper[4676]: I0124 00:56:55.255668 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:56:55 crc kubenswrapper[4676]: E0124 00:56:55.256451 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:57:10 crc kubenswrapper[4676]: I0124 00:57:10.255860 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:57:10 crc kubenswrapper[4676]: E0124 00:57:10.256571 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:57:18 crc kubenswrapper[4676]: I0124 00:57:18.971911 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-54b7b855f4-s49zw_c015646b-9361-4b5c-b465-a66d7fc5cc53/barbican-api/0.log" Jan 24 00:57:19 crc kubenswrapper[4676]: I0124 00:57:19.121185 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-54b7b855f4-s49zw_c015646b-9361-4b5c-b465-a66d7fc5cc53/barbican-api-log/0.log" Jan 24 00:57:19 crc kubenswrapper[4676]: I0124 00:57:19.185823 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-79ccd69c74-nj8k6_a05052ce-062e-423c-80cf-78349e42718f/barbican-keystone-listener/0.log" Jan 24 00:57:19 crc kubenswrapper[4676]: I0124 00:57:19.247650 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-79ccd69c74-nj8k6_a05052ce-062e-423c-80cf-78349e42718f/barbican-keystone-listener-log/0.log" Jan 24 00:57:19 crc kubenswrapper[4676]: I0124 00:57:19.401604 4676 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68bd7fb46c-sflbz_009f35e0-3a98-453b-b92e-8db9e5c92798/barbican-worker-log/0.log" Jan 24 00:57:19 crc kubenswrapper[4676]: I0124 00:57:19.410070 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68bd7fb46c-sflbz_009f35e0-3a98-453b-b92e-8db9e5c92798/barbican-worker/0.log" Jan 24 00:57:19 crc kubenswrapper[4676]: I0124 00:57:19.613815 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp_1fd4e1f4-0772-493a-b929-6e93470f9abf/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 00:57:19 crc kubenswrapper[4676]: I0124 00:57:19.725126 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58bbd77a-9518-4037-b96b-a1490082fb04/ceilometer-notification-agent/0.log" Jan 24 00:57:19 crc kubenswrapper[4676]: I0124 00:57:19.734104 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58bbd77a-9518-4037-b96b-a1490082fb04/ceilometer-central-agent/0.log" Jan 24 00:57:19 crc kubenswrapper[4676]: I0124 00:57:19.841206 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58bbd77a-9518-4037-b96b-a1490082fb04/proxy-httpd/0.log" Jan 24 00:57:19 crc kubenswrapper[4676]: I0124 00:57:19.940752 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d38cef52-387b-4633-be75-6dc455ad53c4/cinder-api/0.log" Jan 24 00:57:19 crc kubenswrapper[4676]: I0124 00:57:19.972597 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58bbd77a-9518-4037-b96b-a1490082fb04/sg-core/0.log" Jan 24 00:57:20 crc kubenswrapper[4676]: I0124 00:57:20.079937 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d38cef52-387b-4633-be75-6dc455ad53c4/cinder-api-log/0.log" Jan 24 00:57:20 crc kubenswrapper[4676]: I0124 00:57:20.231039 4676 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978/cinder-scheduler/0.log" Jan 24 00:57:20 crc kubenswrapper[4676]: I0124 00:57:20.279063 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978/probe/0.log" Jan 24 00:57:20 crc kubenswrapper[4676]: I0124 00:57:20.430103 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg_7f552002-93ef-485f-9227-a94733534466/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 00:57:20 crc kubenswrapper[4676]: I0124 00:57:20.568107 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7_f6bc5ee4-f730-4e1e-9684-b643daed2519/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 00:57:20 crc kubenswrapper[4676]: I0124 00:57:20.677499 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b865b64bc-drclt_cdf394f7-5d67-4a0f-9644-82fe83a72e2d/init/0.log" Jan 24 00:57:20 crc kubenswrapper[4676]: I0124 00:57:20.840723 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b865b64bc-drclt_cdf394f7-5d67-4a0f-9644-82fe83a72e2d/init/0.log" Jan 24 00:57:20 crc kubenswrapper[4676]: I0124 00:57:20.934723 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b865b64bc-drclt_cdf394f7-5d67-4a0f-9644-82fe83a72e2d/dnsmasq-dns/0.log" Jan 24 00:57:20 crc kubenswrapper[4676]: I0124 00:57:20.964822 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-skh8k_7e20d58a-5f01-4c23-9ab0-650d3ae76844/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 00:57:21 crc kubenswrapper[4676]: I0124 00:57:21.127085 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_a6cef518-8385-4315-83d5-f46a6144d5a0/glance-httpd/0.log" Jan 24 00:57:21 crc kubenswrapper[4676]: I0124 00:57:21.138907 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a6cef518-8385-4315-83d5-f46a6144d5a0/glance-log/0.log" Jan 24 00:57:21 crc kubenswrapper[4676]: I0124 00:57:21.298992 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_042ec835-b8c1-43be-a19d-d70f76128e26/glance-log/0.log" Jan 24 00:57:21 crc kubenswrapper[4676]: I0124 00:57:21.342328 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_042ec835-b8c1-43be-a19d-d70f76128e26/glance-httpd/0.log" Jan 24 00:57:21 crc kubenswrapper[4676]: I0124 00:57:21.483228 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f876ddf46-fs7qv_ac7dce6b-3bd9-4ad9-9485-83d9384b8bad/horizon/1.log" Jan 24 00:57:21 crc kubenswrapper[4676]: I0124 00:57:21.559811 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f876ddf46-fs7qv_ac7dce6b-3bd9-4ad9-9485-83d9384b8bad/horizon/0.log" Jan 24 00:57:21 crc kubenswrapper[4676]: I0124 00:57:21.739976 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs_e9d2a92f-e22e-44c5-86c5-8b38824e3d4c/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 00:57:21 crc kubenswrapper[4676]: I0124 00:57:21.869954 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f876ddf46-fs7qv_ac7dce6b-3bd9-4ad9-9485-83d9384b8bad/horizon-log/0.log" Jan 24 00:57:21 crc kubenswrapper[4676]: I0124 00:57:21.971345 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-sz24c_15fbef8d-e606-4b72-a994-e71d03e8fec8/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 
24 00:57:22 crc kubenswrapper[4676]: I0124 00:57:22.127131 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-849b597d57-kw79c_29bf9e4b-4fb3-41f4-9280-f7ea2e61a844/keystone-api/0.log" Jan 24 00:57:22 crc kubenswrapper[4676]: I0124 00:57:22.202326 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_25ee749c-0b84-4abd-9fe0-a6f23c0c912d/kube-state-metrics/0.log" Jan 24 00:57:22 crc kubenswrapper[4676]: I0124 00:57:22.345850 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-ddk99_2cafe497-da96-4b39-bec2-1ec54f859303/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 00:57:22 crc kubenswrapper[4676]: I0124 00:57:22.978580 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bd579cfd9-q4npp_6dda8455-6777-4efc-abf3-df547cf58339/neutron-api/0.log" Jan 24 00:57:23 crc kubenswrapper[4676]: I0124 00:57:23.077188 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bd579cfd9-q4npp_6dda8455-6777-4efc-abf3-df547cf58339/neutron-httpd/0.log" Jan 24 00:57:23 crc kubenswrapper[4676]: I0124 00:57:23.270945 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7_cba640eb-c65f-46be-af5d-5126418c361a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 00:57:23 crc kubenswrapper[4676]: I0124 00:57:23.601334 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ff542eb3-0142-4d46-a6e6-2e89c73f5824/nova-api-log/0.log" Jan 24 00:57:23 crc kubenswrapper[4676]: I0124 00:57:23.716801 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ff542eb3-0142-4d46-a6e6-2e89c73f5824/nova-api-api/0.log" Jan 24 00:57:23 crc kubenswrapper[4676]: I0124 00:57:23.726003 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_400ba963-913a-401c-8f2e-21005977e0c2/nova-cell0-conductor-conductor/0.log" Jan 24 00:57:23 crc kubenswrapper[4676]: I0124 00:57:23.945517 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4420d197-3908-406b-9661-67d64ccd7768/nova-cell1-conductor-conductor/0.log" Jan 24 00:57:24 crc kubenswrapper[4676]: I0124 00:57:24.095892 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b1499ed8-1041-4980-b9da-cb957cbf215c/nova-cell1-novncproxy-novncproxy/0.log" Jan 24 00:57:24 crc kubenswrapper[4676]: I0124 00:57:24.295256 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-5cl2g_75cae94d-b818-4f92-b42d-fd8cec63a657/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 00:57:24 crc kubenswrapper[4676]: I0124 00:57:24.464625 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f1085d48-78f1-4437-a518-239ba90b7c0b/nova-metadata-log/0.log" Jan 24 00:57:24 crc kubenswrapper[4676]: I0124 00:57:24.755413 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_8ff6f87b-a571-43da-9fbc-9203f0001771/nova-scheduler-scheduler/0.log" Jan 24 00:57:24 crc kubenswrapper[4676]: I0124 00:57:24.815964 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_19365292-50d8-4e94-952f-2df7ee20f0ba/mysql-bootstrap/0.log" Jan 24 00:57:25 crc kubenswrapper[4676]: I0124 00:57:25.068366 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_19365292-50d8-4e94-952f-2df7ee20f0ba/mysql-bootstrap/0.log" Jan 24 00:57:25 crc kubenswrapper[4676]: I0124 00:57:25.099531 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_19365292-50d8-4e94-952f-2df7ee20f0ba/galera/0.log" Jan 24 00:57:25 crc 
kubenswrapper[4676]: I0124 00:57:25.255201 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:57:25 crc kubenswrapper[4676]: E0124 00:57:25.255449 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:57:25 crc kubenswrapper[4676]: I0124 00:57:25.280137 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2bbaae64-ac2d-43c6-8984-5483f2eb4211/mysql-bootstrap/0.log" Jan 24 00:57:25 crc kubenswrapper[4676]: I0124 00:57:25.353904 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f1085d48-78f1-4437-a518-239ba90b7c0b/nova-metadata-metadata/0.log" Jan 24 00:57:25 crc kubenswrapper[4676]: I0124 00:57:25.578317 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2bbaae64-ac2d-43c6-8984-5483f2eb4211/mysql-bootstrap/0.log" Jan 24 00:57:25 crc kubenswrapper[4676]: I0124 00:57:25.643031 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f3fa6e59-785d-4d40-8d73-170552068e43/openstackclient/0.log" Jan 24 00:57:25 crc kubenswrapper[4676]: I0124 00:57:25.650697 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2bbaae64-ac2d-43c6-8984-5483f2eb4211/galera/0.log" Jan 24 00:57:25 crc kubenswrapper[4676]: I0124 00:57:25.821393 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jsd4n_7adbcf83-efbd-4e8d-97e5-f8768463284a/ovn-controller/0.log" Jan 24 00:57:25 crc kubenswrapper[4676]: I0124 
00:57:25.956307 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mq6ns_21550436-cd71-46d8-838e-c51c19ddf8ff/openstack-network-exporter/0.log" Jan 24 00:57:26 crc kubenswrapper[4676]: I0124 00:57:26.151673 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6sl9q_427fbd2d-16ef-44a6-a71d-8172f56b863d/ovsdb-server-init/0.log" Jan 24 00:57:26 crc kubenswrapper[4676]: I0124 00:57:26.450044 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6sl9q_427fbd2d-16ef-44a6-a71d-8172f56b863d/ovsdb-server-init/0.log" Jan 24 00:57:26 crc kubenswrapper[4676]: I0124 00:57:26.457369 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6sl9q_427fbd2d-16ef-44a6-a71d-8172f56b863d/ovs-vswitchd/0.log" Jan 24 00:57:26 crc kubenswrapper[4676]: I0124 00:57:26.462997 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6sl9q_427fbd2d-16ef-44a6-a71d-8172f56b863d/ovsdb-server/0.log" Jan 24 00:57:26 crc kubenswrapper[4676]: I0124 00:57:26.723910 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1486ea92-d267-49cd-8516-d474ef25c2df/openstack-network-exporter/0.log" Jan 24 00:57:26 crc kubenswrapper[4676]: I0124 00:57:26.765053 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-527bv_55444bfa-a024-4606-aa57-6456c6688e52/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 00:57:26 crc kubenswrapper[4676]: I0124 00:57:26.874096 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1486ea92-d267-49cd-8516-d474ef25c2df/ovn-northd/0.log" Jan 24 00:57:27 crc kubenswrapper[4676]: I0124 00:57:27.001846 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3bff778a-b10f-4ba9-a12f-f4086608fd30/openstack-network-exporter/0.log" Jan 
24 00:57:27 crc kubenswrapper[4676]: I0124 00:57:27.113216 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3bff778a-b10f-4ba9-a12f-f4086608fd30/ovsdbserver-nb/0.log" Jan 24 00:57:27 crc kubenswrapper[4676]: I0124 00:57:27.214628 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f3648c65-9fcb-4a9e-b4cb-d8437dc00141/openstack-network-exporter/0.log" Jan 24 00:57:27 crc kubenswrapper[4676]: I0124 00:57:27.290580 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f3648c65-9fcb-4a9e-b4cb-d8437dc00141/ovsdbserver-sb/0.log" Jan 24 00:57:27 crc kubenswrapper[4676]: I0124 00:57:27.438486 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-765f6cdf58-5q9v9_21e63383-223a-4247-8589-03ab5a33f980/placement-api/0.log" Jan 24 00:57:27 crc kubenswrapper[4676]: I0124 00:57:27.566421 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-765f6cdf58-5q9v9_21e63383-223a-4247-8589-03ab5a33f980/placement-log/0.log" Jan 24 00:57:27 crc kubenswrapper[4676]: I0124 00:57:27.607662 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2df1d42-fa93-4771-ba77-1c27f820b298/setup-container/0.log" Jan 24 00:57:27 crc kubenswrapper[4676]: I0124 00:57:27.865941 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2df1d42-fa93-4771-ba77-1c27f820b298/setup-container/0.log" Jan 24 00:57:27 crc kubenswrapper[4676]: I0124 00:57:27.933014 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2df1d42-fa93-4771-ba77-1c27f820b298/rabbitmq/0.log" Jan 24 00:57:28 crc kubenswrapper[4676]: I0124 00:57:28.020624 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c162e478-58e3-4a83-97cb-29887613c1aa/setup-container/0.log" Jan 24 00:57:28 crc 
kubenswrapper[4676]: I0124 00:57:28.198359 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c162e478-58e3-4a83-97cb-29887613c1aa/setup-container/0.log" Jan 24 00:57:28 crc kubenswrapper[4676]: I0124 00:57:28.307116 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c162e478-58e3-4a83-97cb-29887613c1aa/rabbitmq/0.log" Jan 24 00:57:28 crc kubenswrapper[4676]: I0124 00:57:28.318266 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-448ht_447c1e1f-d798-4bcc-a8ef-91d4ad5426a5/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 00:57:28 crc kubenswrapper[4676]: I0124 00:57:28.607908 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-qx6r8_19b712ec-28a7-419f-9f09-7d0b0ecbf747/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 00:57:28 crc kubenswrapper[4676]: I0124 00:57:28.624390 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd_15cfbdc6-1f3b-49a5-8822-c4af1e686731/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 00:57:28 crc kubenswrapper[4676]: I0124 00:57:28.947951 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-mhrvn_2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 00:57:28 crc kubenswrapper[4676]: I0124 00:57:28.970170 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-8b9rz_d1ed73dc-4392-4d20-a592-4a8c5ba9c104/ssh-known-hosts-edpm-deployment/0.log" Jan 24 00:57:29 crc kubenswrapper[4676]: I0124 00:57:29.265978 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f47776b4c-v4xb2_b976b9e2-b80e-4626-919d-3bb84f0151e8/proxy-server/0.log" Jan 24 00:57:29 
crc kubenswrapper[4676]: I0124 00:57:29.379245 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f47776b4c-v4xb2_b976b9e2-b80e-4626-919d-3bb84f0151e8/proxy-httpd/0.log" Jan 24 00:57:29 crc kubenswrapper[4676]: I0124 00:57:29.432293 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-5fmzb_4ff61f48-e451-47e8-adcc-0870b29d28a9/swift-ring-rebalance/0.log" Jan 24 00:57:29 crc kubenswrapper[4676]: I0124 00:57:29.633434 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/account-reaper/0.log" Jan 24 00:57:29 crc kubenswrapper[4676]: I0124 00:57:29.674408 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/account-auditor/0.log" Jan 24 00:57:29 crc kubenswrapper[4676]: I0124 00:57:29.696999 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/account-replicator/0.log" Jan 24 00:57:29 crc kubenswrapper[4676]: I0124 00:57:29.830554 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/container-auditor/0.log" Jan 24 00:57:29 crc kubenswrapper[4676]: I0124 00:57:29.900091 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/account-server/0.log" Jan 24 00:57:30 crc kubenswrapper[4676]: I0124 00:57:30.060159 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/container-replicator/0.log" Jan 24 00:57:30 crc kubenswrapper[4676]: I0124 00:57:30.064485 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/container-server/0.log" Jan 24 00:57:30 crc kubenswrapper[4676]: I0124 00:57:30.336322 
4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/container-updater/0.log" Jan 24 00:57:30 crc kubenswrapper[4676]: I0124 00:57:30.353024 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/object-auditor/0.log" Jan 24 00:57:30 crc kubenswrapper[4676]: I0124 00:57:30.470174 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/object-expirer/0.log" Jan 24 00:57:30 crc kubenswrapper[4676]: I0124 00:57:30.489558 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/object-replicator/0.log" Jan 24 00:57:30 crc kubenswrapper[4676]: I0124 00:57:30.673676 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/object-updater/0.log" Jan 24 00:57:30 crc kubenswrapper[4676]: I0124 00:57:30.688432 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/rsync/0.log" Jan 24 00:57:30 crc kubenswrapper[4676]: I0124 00:57:30.714073 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/object-server/0.log" Jan 24 00:57:30 crc kubenswrapper[4676]: I0124 00:57:30.832330 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/swift-recon-cron/0.log" Jan 24 00:57:31 crc kubenswrapper[4676]: I0124 00:57:31.086902 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv_7ef31551-e4ed-48d0-a4d6-f9c2fb515966/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 00:57:31 crc kubenswrapper[4676]: I0124 00:57:31.119548 4676 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_3e2adf44-9053-4dcd-9d47-27910710dbc8/tempest-tests-tempest-tests-runner/0.log" Jan 24 00:57:31 crc kubenswrapper[4676]: I0124 00:57:31.538606 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e1574f42-e89a-40d4-b6da-2d4ef0824916/test-operator-logs-container/0.log" Jan 24 00:57:31 crc kubenswrapper[4676]: I0124 00:57:31.640077 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd_83640815-cc06-4abe-a06f-20a1f8798609/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 00:57:38 crc kubenswrapper[4676]: I0124 00:57:38.255992 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:57:38 crc kubenswrapper[4676]: E0124 00:57:38.256644 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:57:43 crc kubenswrapper[4676]: I0124 00:57:43.716461 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_545f6045-cf2f-4d4b-91d8-227148ddd71a/memcached/0.log" Jan 24 00:57:53 crc kubenswrapper[4676]: I0124 00:57:53.255929 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:57:53 crc kubenswrapper[4676]: E0124 00:57:53.256678 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:58:01 crc kubenswrapper[4676]: I0124 00:58:01.178515 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789_9d669669-26ba-4775-b7cc-e97cc7dbe326/util/0.log" Jan 24 00:58:01 crc kubenswrapper[4676]: I0124 00:58:01.288744 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789_9d669669-26ba-4775-b7cc-e97cc7dbe326/util/0.log" Jan 24 00:58:01 crc kubenswrapper[4676]: I0124 00:58:01.342669 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789_9d669669-26ba-4775-b7cc-e97cc7dbe326/pull/0.log" Jan 24 00:58:01 crc kubenswrapper[4676]: I0124 00:58:01.342782 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789_9d669669-26ba-4775-b7cc-e97cc7dbe326/pull/0.log" Jan 24 00:58:01 crc kubenswrapper[4676]: I0124 00:58:01.564433 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789_9d669669-26ba-4775-b7cc-e97cc7dbe326/pull/0.log" Jan 24 00:58:01 crc kubenswrapper[4676]: I0124 00:58:01.571796 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789_9d669669-26ba-4775-b7cc-e97cc7dbe326/util/0.log" Jan 24 00:58:01 crc kubenswrapper[4676]: I0124 00:58:01.616998 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789_9d669669-26ba-4775-b7cc-e97cc7dbe326/extract/0.log" Jan 24 00:58:01 crc kubenswrapper[4676]: I0124 00:58:01.873046 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-c8c6m_0cd05b9f-6699-46e3-ae36-9f21352e6c8e/manager/0.log" Jan 24 00:58:01 crc kubenswrapper[4676]: I0124 00:58:01.902633 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-fgplq_02123851-7d2f-477b-9c60-5a9922a0bc97/manager/0.log" Jan 24 00:58:02 crc kubenswrapper[4676]: I0124 00:58:02.003592 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-js6db_a9f1e2a4-c9fa-4136-aa76-059dc2ed9c85/manager/0.log" Jan 24 00:58:02 crc kubenswrapper[4676]: I0124 00:58:02.114062 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-btxnv_29e4b64d-19bd-419b-9e29-7a41e6f12ae0/manager/0.log" Jan 24 00:58:02 crc kubenswrapper[4676]: I0124 00:58:02.200163 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-p2nr8_5e9cf1cb-c413-45ad-8a51-bf35407fcdfe/manager/0.log" Jan 24 00:58:02 crc kubenswrapper[4676]: I0124 00:58:02.347385 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-rbzsj_6b8541f9-a37a-41d6-8006-3d0335c3abb5/manager/0.log" Jan 24 00:58:02 crc kubenswrapper[4676]: I0124 00:58:02.660903 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-v25h4_555ebb8f-1bc3-4b8d-9f37-cad92b48477c/manager/0.log" Jan 24 00:58:02 crc kubenswrapper[4676]: I0124 00:58:02.667075 4676 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-jxx26_921e121c-5261-4fe7-8171-6b634babedf4/manager/0.log" Jan 24 00:58:02 crc kubenswrapper[4676]: I0124 00:58:02.809095 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-2gzqg_9a3f9a14-1138-425d-8a56-454b282d7d9f/manager/0.log" Jan 24 00:58:02 crc kubenswrapper[4676]: I0124 00:58:02.919069 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-g4d8z_dfc79179-d245-4360-be6e-8b43441e23ed/manager/0.log" Jan 24 00:58:03 crc kubenswrapper[4676]: I0124 00:58:03.102152 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n_4ce661f6-26e2-4da2-a759-e493a60587b2/manager/0.log" Jan 24 00:58:03 crc kubenswrapper[4676]: I0124 00:58:03.140994 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-wqbcz_dd6346d8-9cf1-4364-b480-f4c2d872472f/manager/0.log" Jan 24 00:58:03 crc kubenswrapper[4676]: I0124 00:58:03.345277 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-4xp45_df9ab5f0-f577-4303-8045-f960c67a6936/manager/0.log" Jan 24 00:58:03 crc kubenswrapper[4676]: I0124 00:58:03.362992 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-k8lw7_ced74bcb-8345-40c5-b2d4-3d369f30b835/manager/0.log" Jan 24 00:58:03 crc kubenswrapper[4676]: I0124 00:58:03.538068 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms_196f45b9-e656-4760-b058-e0b5c08a50d9/manager/0.log" Jan 24 00:58:03 crc kubenswrapper[4676]: I0124 
00:58:03.726723 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-69647cdbc5-96fbr_07dc00eb-bfcb-4d0d-bd6a-9e4b52e3e7f6/operator/0.log" Jan 24 00:58:03 crc kubenswrapper[4676]: I0124 00:58:03.946832 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-kw9l8_25146f99-405a-4473-bf27-69a7195a3338/registry-server/0.log" Jan 24 00:58:04 crc kubenswrapper[4676]: I0124 00:58:04.248653 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-49nrq_53358678-d763-4b02-a157-86a57ebd0305/manager/0.log" Jan 24 00:58:04 crc kubenswrapper[4676]: I0124 00:58:04.395008 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-tdqwh_d85fa79d-818f-4079-aac4-f3fa51a90e9a/manager/0.log" Jan 24 00:58:04 crc kubenswrapper[4676]: I0124 00:58:04.632459 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-h44qw_fce4c8d0-b903-4873-8c89-2f4b9dd9c05d/operator/0.log" Jan 24 00:58:04 crc kubenswrapper[4676]: I0124 00:58:04.782609 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5d5f8c4f48-md6zs_9d79c791-c851-4c4a-aa2d-d175b668b0f5/manager/0.log" Jan 24 00:58:04 crc kubenswrapper[4676]: I0124 00:58:04.852414 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5df95d5965-h8wx9_b0c8972b-31d7-40c1-bc65-1478718d41a5/manager/0.log" Jan 24 00:58:04 crc kubenswrapper[4676]: I0124 00:58:04.996936 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-4c5zl_060e1c8d-dfa6-428f-bffe-d89ac3dab8c3/manager/0.log" Jan 24 00:58:05 crc kubenswrapper[4676]: 
I0124 00:58:05.086631 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-gqg82_6245b73e-9fba-4ad7-bbbc-31db48c03825/manager/0.log" Jan 24 00:58:05 crc kubenswrapper[4676]: I0124 00:58:05.283333 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-h6hzt_ccb1ff12-bef7-4f23-b084-fae32f8202ac/manager/0.log" Jan 24 00:58:08 crc kubenswrapper[4676]: I0124 00:58:08.256266 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:58:08 crc kubenswrapper[4676]: E0124 00:58:08.256718 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 00:58:20 crc kubenswrapper[4676]: I0124 00:58:20.801317 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bl9lk"] Jan 24 00:58:20 crc kubenswrapper[4676]: E0124 00:58:20.802311 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e133ba60-b33b-4289-a6a7-ef3827079dbb" containerName="container-00" Jan 24 00:58:20 crc kubenswrapper[4676]: I0124 00:58:20.802327 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="e133ba60-b33b-4289-a6a7-ef3827079dbb" containerName="container-00" Jan 24 00:58:20 crc kubenswrapper[4676]: E0124 00:58:20.802360 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b3d882-428b-47c9-97e6-9c26f4334f3e" containerName="registry-server" Jan 24 00:58:20 crc kubenswrapper[4676]: I0124 00:58:20.802368 4676 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="57b3d882-428b-47c9-97e6-9c26f4334f3e" containerName="registry-server" Jan 24 00:58:20 crc kubenswrapper[4676]: E0124 00:58:20.802397 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b3d882-428b-47c9-97e6-9c26f4334f3e" containerName="extract-utilities" Jan 24 00:58:20 crc kubenswrapper[4676]: I0124 00:58:20.802407 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b3d882-428b-47c9-97e6-9c26f4334f3e" containerName="extract-utilities" Jan 24 00:58:20 crc kubenswrapper[4676]: E0124 00:58:20.802425 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b3d882-428b-47c9-97e6-9c26f4334f3e" containerName="extract-content" Jan 24 00:58:20 crc kubenswrapper[4676]: I0124 00:58:20.802433 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b3d882-428b-47c9-97e6-9c26f4334f3e" containerName="extract-content" Jan 24 00:58:20 crc kubenswrapper[4676]: I0124 00:58:20.802659 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="e133ba60-b33b-4289-a6a7-ef3827079dbb" containerName="container-00" Jan 24 00:58:20 crc kubenswrapper[4676]: I0124 00:58:20.802677 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="57b3d882-428b-47c9-97e6-9c26f4334f3e" containerName="registry-server" Jan 24 00:58:20 crc kubenswrapper[4676]: I0124 00:58:20.804291 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:20 crc kubenswrapper[4676]: I0124 00:58:20.822049 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bl9lk"] Jan 24 00:58:20 crc kubenswrapper[4676]: I0124 00:58:20.914034 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-utilities\") pod \"redhat-operators-bl9lk\" (UID: \"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc\") " pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:20 crc kubenswrapper[4676]: I0124 00:58:20.914083 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k7qj\" (UniqueName: \"kubernetes.io/projected/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-kube-api-access-5k7qj\") pod \"redhat-operators-bl9lk\" (UID: \"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc\") " pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:20 crc kubenswrapper[4676]: I0124 00:58:20.914192 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-catalog-content\") pod \"redhat-operators-bl9lk\" (UID: \"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc\") " pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:21 crc kubenswrapper[4676]: I0124 00:58:21.016354 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-catalog-content\") pod \"redhat-operators-bl9lk\" (UID: \"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc\") " pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:21 crc kubenswrapper[4676]: I0124 00:58:21.016536 4676 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-utilities\") pod \"redhat-operators-bl9lk\" (UID: \"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc\") " pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:21 crc kubenswrapper[4676]: I0124 00:58:21.016602 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k7qj\" (UniqueName: \"kubernetes.io/projected/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-kube-api-access-5k7qj\") pod \"redhat-operators-bl9lk\" (UID: \"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc\") " pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:21 crc kubenswrapper[4676]: I0124 00:58:21.016841 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-catalog-content\") pod \"redhat-operators-bl9lk\" (UID: \"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc\") " pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:21 crc kubenswrapper[4676]: I0124 00:58:21.016991 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-utilities\") pod \"redhat-operators-bl9lk\" (UID: \"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc\") " pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:21 crc kubenswrapper[4676]: I0124 00:58:21.044429 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k7qj\" (UniqueName: \"kubernetes.io/projected/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-kube-api-access-5k7qj\") pod \"redhat-operators-bl9lk\" (UID: \"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc\") " pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:21 crc kubenswrapper[4676]: I0124 00:58:21.140294 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:21 crc kubenswrapper[4676]: I0124 00:58:21.257912 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 00:58:21 crc kubenswrapper[4676]: I0124 00:58:21.524660 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"e910204e57fdf9a2ba78d40b8e9fd8506cf9f7184737c120d4af2680b61668e0"} Jan 24 00:58:21 crc kubenswrapper[4676]: I0124 00:58:21.622402 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bl9lk"] Jan 24 00:58:21 crc kubenswrapper[4676]: W0124 00:58:21.638511 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8a1e5fc_eb67_49a7_b2cd_2b90bee197cc.slice/crio-0dba3689f66e0cb7d0e24e7df3a5c5d6dd8739276f75ac1dc841f5d60c3282bb WatchSource:0}: Error finding container 0dba3689f66e0cb7d0e24e7df3a5c5d6dd8739276f75ac1dc841f5d60c3282bb: Status 404 returned error can't find the container with id 0dba3689f66e0cb7d0e24e7df3a5c5d6dd8739276f75ac1dc841f5d60c3282bb Jan 24 00:58:22 crc kubenswrapper[4676]: I0124 00:58:22.532493 4676 generic.go:334] "Generic (PLEG): container finished" podID="b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" containerID="688d8fdbedb79d38d0104bb243d58508f9226f14b06d510893fa4ef3271182eb" exitCode=0 Jan 24 00:58:22 crc kubenswrapper[4676]: I0124 00:58:22.532707 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl9lk" event={"ID":"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc","Type":"ContainerDied","Data":"688d8fdbedb79d38d0104bb243d58508f9226f14b06d510893fa4ef3271182eb"} Jan 24 00:58:22 crc kubenswrapper[4676]: I0124 00:58:22.532877 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-bl9lk" event={"ID":"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc","Type":"ContainerStarted","Data":"0dba3689f66e0cb7d0e24e7df3a5c5d6dd8739276f75ac1dc841f5d60c3282bb"} Jan 24 00:58:22 crc kubenswrapper[4676]: I0124 00:58:22.538419 4676 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 00:58:24 crc kubenswrapper[4676]: I0124 00:58:24.551502 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl9lk" event={"ID":"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc","Type":"ContainerStarted","Data":"969329c3edcedad71aad0fb3d89acc2cef29203d9541bdbe7d5e06de7ec62a81"} Jan 24 00:58:27 crc kubenswrapper[4676]: I0124 00:58:27.580881 4676 generic.go:334] "Generic (PLEG): container finished" podID="b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" containerID="969329c3edcedad71aad0fb3d89acc2cef29203d9541bdbe7d5e06de7ec62a81" exitCode=0 Jan 24 00:58:27 crc kubenswrapper[4676]: I0124 00:58:27.580979 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl9lk" event={"ID":"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc","Type":"ContainerDied","Data":"969329c3edcedad71aad0fb3d89acc2cef29203d9541bdbe7d5e06de7ec62a81"} Jan 24 00:58:28 crc kubenswrapper[4676]: I0124 00:58:28.228072 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mkwj8_f53afe6a-307c-4b0d-88cb-596703f35f8a/control-plane-machine-set-operator/0.log" Jan 24 00:58:28 crc kubenswrapper[4676]: I0124 00:58:28.380703 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jmv6d_9c51c973-c370-41e8-b167-25d3b11418bf/kube-rbac-proxy/0.log" Jan 24 00:58:28 crc kubenswrapper[4676]: I0124 00:58:28.483672 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jmv6d_9c51c973-c370-41e8-b167-25d3b11418bf/machine-api-operator/0.log" Jan 24 00:58:28 crc kubenswrapper[4676]: I0124 00:58:28.592785 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl9lk" event={"ID":"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc","Type":"ContainerStarted","Data":"9d43661722191fb8d28de5b7542e25f9042c82a9ed08eb79ea6a8b5d68859f6c"} Jan 24 00:58:31 crc kubenswrapper[4676]: I0124 00:58:31.141427 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:31 crc kubenswrapper[4676]: I0124 00:58:31.149163 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:32 crc kubenswrapper[4676]: I0124 00:58:32.211477 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bl9lk" podUID="b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" containerName="registry-server" probeResult="failure" output=< Jan 24 00:58:32 crc kubenswrapper[4676]: timeout: failed to connect service ":50051" within 1s Jan 24 00:58:32 crc kubenswrapper[4676]: > Jan 24 00:58:41 crc kubenswrapper[4676]: I0124 00:58:41.190162 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:41 crc kubenswrapper[4676]: I0124 00:58:41.213632 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bl9lk" podStartSLOduration=15.719740757 podStartE2EDuration="21.21361411s" podCreationTimestamp="2026-01-24 00:58:20 +0000 UTC" firstStartedPulling="2026-01-24 00:58:22.5376986 +0000 UTC m=+3286.567669601" lastFinishedPulling="2026-01-24 00:58:28.031571923 +0000 UTC m=+3292.061542954" observedRunningTime="2026-01-24 00:58:28.617358812 +0000 UTC m=+3292.647329813" 
watchObservedRunningTime="2026-01-24 00:58:41.21361411 +0000 UTC m=+3305.243585111" Jan 24 00:58:41 crc kubenswrapper[4676]: I0124 00:58:41.245662 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:41 crc kubenswrapper[4676]: I0124 00:58:41.423624 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bl9lk"] Jan 24 00:58:42 crc kubenswrapper[4676]: I0124 00:58:42.697819 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bl9lk" podUID="b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" containerName="registry-server" containerID="cri-o://9d43661722191fb8d28de5b7542e25f9042c82a9ed08eb79ea6a8b5d68859f6c" gracePeriod=2 Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.158611 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.311880 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-utilities\") pod \"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc\" (UID: \"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc\") " Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.311955 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-catalog-content\") pod \"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc\" (UID: \"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc\") " Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.312023 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k7qj\" (UniqueName: \"kubernetes.io/projected/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-kube-api-access-5k7qj\") pod 
\"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc\" (UID: \"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc\") " Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.313995 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-utilities" (OuterVolumeSpecName: "utilities") pod "b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" (UID: "b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.320490 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-kube-api-access-5k7qj" (OuterVolumeSpecName: "kube-api-access-5k7qj") pod "b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" (UID: "b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc"). InnerVolumeSpecName "kube-api-access-5k7qj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.414287 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.414538 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k7qj\" (UniqueName: \"kubernetes.io/projected/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-kube-api-access-5k7qj\") on node \"crc\" DevicePath \"\"" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.442881 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" (UID: "b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.516038 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.706461 4676 generic.go:334] "Generic (PLEG): container finished" podID="b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" containerID="9d43661722191fb8d28de5b7542e25f9042c82a9ed08eb79ea6a8b5d68859f6c" exitCode=0 Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.706567 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl9lk" event={"ID":"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc","Type":"ContainerDied","Data":"9d43661722191fb8d28de5b7542e25f9042c82a9ed08eb79ea6a8b5d68859f6c"} Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.707483 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl9lk" event={"ID":"b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc","Type":"ContainerDied","Data":"0dba3689f66e0cb7d0e24e7df3a5c5d6dd8739276f75ac1dc841f5d60c3282bb"} Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.706609 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bl9lk" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.707604 4676 scope.go:117] "RemoveContainer" containerID="9d43661722191fb8d28de5b7542e25f9042c82a9ed08eb79ea6a8b5d68859f6c" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.739067 4676 scope.go:117] "RemoveContainer" containerID="969329c3edcedad71aad0fb3d89acc2cef29203d9541bdbe7d5e06de7ec62a81" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.766933 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bl9lk"] Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.777994 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bl9lk"] Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.784924 4676 scope.go:117] "RemoveContainer" containerID="688d8fdbedb79d38d0104bb243d58508f9226f14b06d510893fa4ef3271182eb" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.818811 4676 scope.go:117] "RemoveContainer" containerID="9d43661722191fb8d28de5b7542e25f9042c82a9ed08eb79ea6a8b5d68859f6c" Jan 24 00:58:43 crc kubenswrapper[4676]: E0124 00:58:43.819256 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d43661722191fb8d28de5b7542e25f9042c82a9ed08eb79ea6a8b5d68859f6c\": container with ID starting with 9d43661722191fb8d28de5b7542e25f9042c82a9ed08eb79ea6a8b5d68859f6c not found: ID does not exist" containerID="9d43661722191fb8d28de5b7542e25f9042c82a9ed08eb79ea6a8b5d68859f6c" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.819299 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d43661722191fb8d28de5b7542e25f9042c82a9ed08eb79ea6a8b5d68859f6c"} err="failed to get container status \"9d43661722191fb8d28de5b7542e25f9042c82a9ed08eb79ea6a8b5d68859f6c\": rpc error: code = NotFound desc = could not find container 
\"9d43661722191fb8d28de5b7542e25f9042c82a9ed08eb79ea6a8b5d68859f6c\": container with ID starting with 9d43661722191fb8d28de5b7542e25f9042c82a9ed08eb79ea6a8b5d68859f6c not found: ID does not exist" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.819328 4676 scope.go:117] "RemoveContainer" containerID="969329c3edcedad71aad0fb3d89acc2cef29203d9541bdbe7d5e06de7ec62a81" Jan 24 00:58:43 crc kubenswrapper[4676]: E0124 00:58:43.819647 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"969329c3edcedad71aad0fb3d89acc2cef29203d9541bdbe7d5e06de7ec62a81\": container with ID starting with 969329c3edcedad71aad0fb3d89acc2cef29203d9541bdbe7d5e06de7ec62a81 not found: ID does not exist" containerID="969329c3edcedad71aad0fb3d89acc2cef29203d9541bdbe7d5e06de7ec62a81" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.819677 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"969329c3edcedad71aad0fb3d89acc2cef29203d9541bdbe7d5e06de7ec62a81"} err="failed to get container status \"969329c3edcedad71aad0fb3d89acc2cef29203d9541bdbe7d5e06de7ec62a81\": rpc error: code = NotFound desc = could not find container \"969329c3edcedad71aad0fb3d89acc2cef29203d9541bdbe7d5e06de7ec62a81\": container with ID starting with 969329c3edcedad71aad0fb3d89acc2cef29203d9541bdbe7d5e06de7ec62a81 not found: ID does not exist" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.819701 4676 scope.go:117] "RemoveContainer" containerID="688d8fdbedb79d38d0104bb243d58508f9226f14b06d510893fa4ef3271182eb" Jan 24 00:58:43 crc kubenswrapper[4676]: E0124 00:58:43.819899 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"688d8fdbedb79d38d0104bb243d58508f9226f14b06d510893fa4ef3271182eb\": container with ID starting with 688d8fdbedb79d38d0104bb243d58508f9226f14b06d510893fa4ef3271182eb not found: ID does not exist" 
containerID="688d8fdbedb79d38d0104bb243d58508f9226f14b06d510893fa4ef3271182eb" Jan 24 00:58:43 crc kubenswrapper[4676]: I0124 00:58:43.819927 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"688d8fdbedb79d38d0104bb243d58508f9226f14b06d510893fa4ef3271182eb"} err="failed to get container status \"688d8fdbedb79d38d0104bb243d58508f9226f14b06d510893fa4ef3271182eb\": rpc error: code = NotFound desc = could not find container \"688d8fdbedb79d38d0104bb243d58508f9226f14b06d510893fa4ef3271182eb\": container with ID starting with 688d8fdbedb79d38d0104bb243d58508f9226f14b06d510893fa4ef3271182eb not found: ID does not exist" Jan 24 00:58:44 crc kubenswrapper[4676]: I0124 00:58:44.265233 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" path="/var/lib/kubelet/pods/b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc/volumes" Jan 24 00:58:44 crc kubenswrapper[4676]: I0124 00:58:44.392444 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-jws6v_1b51f181-5a85-4c07-b259-f67d17bf1134/cert-manager-controller/0.log" Jan 24 00:58:44 crc kubenswrapper[4676]: I0124 00:58:44.669325 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-s27fj_6db681a1-6165-4404-81a9-a189e9b30bfd/cert-manager-cainjector/0.log" Jan 24 00:58:44 crc kubenswrapper[4676]: I0124 00:58:44.694499 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-w7kb7_8f43788b-d559-4623-ae87-a820a2f23b08/cert-manager-webhook/0.log" Jan 24 00:59:00 crc kubenswrapper[4676]: I0124 00:59:00.387997 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-95988_4db34b09-85e6-435d-b991-c2513eec5d17/nmstate-console-plugin/0.log" Jan 24 00:59:00 crc kubenswrapper[4676]: I0124 00:59:00.930549 4676 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lkqsz_6bdbc24e-ee52-41f0-9aaf-1091ac803c27/nmstate-handler/0.log" Jan 24 00:59:01 crc kubenswrapper[4676]: I0124 00:59:01.010900 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rv2q7_0c99caac-d9eb-494c-bd04-c18dbc8a0844/kube-rbac-proxy/0.log" Jan 24 00:59:01 crc kubenswrapper[4676]: I0124 00:59:01.117542 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rv2q7_0c99caac-d9eb-494c-bd04-c18dbc8a0844/nmstate-metrics/0.log" Jan 24 00:59:01 crc kubenswrapper[4676]: I0124 00:59:01.337248 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-cpbrv_badb3470-b60b-44ca-8d9e-52191ea016fa/nmstate-webhook/0.log" Jan 24 00:59:01 crc kubenswrapper[4676]: I0124 00:59:01.462832 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-jsbxx_efdaf1c0-0096-4f21-a0e2-2fc6f6e04de2/nmstate-operator/0.log" Jan 24 00:59:32 crc kubenswrapper[4676]: I0124 00:59:32.185042 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-jqk4f_f5517856-fac7-4312-ab46-86bbd5c1282d/kube-rbac-proxy/0.log" Jan 24 00:59:32 crc kubenswrapper[4676]: I0124 00:59:32.232102 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-jqk4f_f5517856-fac7-4312-ab46-86bbd5c1282d/controller/0.log" Jan 24 00:59:32 crc kubenswrapper[4676]: I0124 00:59:32.433839 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-frr-files/0.log" Jan 24 00:59:32 crc kubenswrapper[4676]: I0124 00:59:32.628722 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-frr-files/0.log" Jan 24 00:59:32 crc 
kubenswrapper[4676]: I0124 00:59:32.711199 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-reloader/0.log" Jan 24 00:59:32 crc kubenswrapper[4676]: I0124 00:59:32.733091 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-metrics/0.log" Jan 24 00:59:32 crc kubenswrapper[4676]: I0124 00:59:32.739024 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-reloader/0.log" Jan 24 00:59:32 crc kubenswrapper[4676]: I0124 00:59:32.921940 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-frr-files/0.log" Jan 24 00:59:32 crc kubenswrapper[4676]: I0124 00:59:32.946846 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-metrics/0.log" Jan 24 00:59:32 crc kubenswrapper[4676]: I0124 00:59:32.954046 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-reloader/0.log" Jan 24 00:59:32 crc kubenswrapper[4676]: I0124 00:59:32.999848 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-metrics/0.log" Jan 24 00:59:33 crc kubenswrapper[4676]: I0124 00:59:33.291814 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-reloader/0.log" Jan 24 00:59:33 crc kubenswrapper[4676]: I0124 00:59:33.298185 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-frr-files/0.log" Jan 24 00:59:33 crc kubenswrapper[4676]: I0124 00:59:33.327871 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-metrics/0.log" Jan 24 00:59:33 crc kubenswrapper[4676]: I0124 00:59:33.339473 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/controller/0.log" Jan 24 00:59:33 crc kubenswrapper[4676]: I0124 00:59:33.539144 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/frr-metrics/0.log" Jan 24 00:59:33 crc kubenswrapper[4676]: I0124 00:59:33.540682 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/kube-rbac-proxy/0.log" Jan 24 00:59:33 crc kubenswrapper[4676]: I0124 00:59:33.683661 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/kube-rbac-proxy-frr/0.log" Jan 24 00:59:34 crc kubenswrapper[4676]: I0124 00:59:34.107237 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/reloader/0.log" Jan 24 00:59:34 crc kubenswrapper[4676]: I0124 00:59:34.197862 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-s7b59_2ce22d83-ee4f-4ad7-8882-b876d4ed52a2/frr-k8s-webhook-server/0.log" Jan 24 00:59:34 crc kubenswrapper[4676]: I0124 00:59:34.520124 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6868db74d6-qfhwb_59200cad-bd1a-472a-a1a1-adccb5211b21/manager/0.log" Jan 24 00:59:34 crc kubenswrapper[4676]: I0124 00:59:34.613009 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/frr/0.log" Jan 24 00:59:34 crc kubenswrapper[4676]: I0124 00:59:34.649371 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6dbffff8f5-cb5l5_adfe0dac-5ac5-44b8-97db-088d1ac83d34/webhook-server/0.log" Jan 24 00:59:34 crc kubenswrapper[4676]: I0124 00:59:34.767695 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zh6td_7da3019e-01de-4671-a78f-6c0d2e57fde3/kube-rbac-proxy/0.log" Jan 24 00:59:35 crc kubenswrapper[4676]: I0124 00:59:35.086796 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zh6td_7da3019e-01de-4671-a78f-6c0d2e57fde3/speaker/0.log" Jan 24 00:59:49 crc kubenswrapper[4676]: I0124 00:59:49.085016 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j_3c111688-154b-47fa-8f89-6e48007b1fec/util/0.log" Jan 24 00:59:49 crc kubenswrapper[4676]: I0124 00:59:49.319443 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j_3c111688-154b-47fa-8f89-6e48007b1fec/util/0.log" Jan 24 00:59:49 crc kubenswrapper[4676]: I0124 00:59:49.376204 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j_3c111688-154b-47fa-8f89-6e48007b1fec/pull/0.log" Jan 24 00:59:49 crc kubenswrapper[4676]: I0124 00:59:49.391459 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j_3c111688-154b-47fa-8f89-6e48007b1fec/pull/0.log" Jan 24 00:59:49 crc kubenswrapper[4676]: I0124 00:59:49.533576 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j_3c111688-154b-47fa-8f89-6e48007b1fec/util/0.log" Jan 24 00:59:49 crc kubenswrapper[4676]: I0124 00:59:49.543634 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j_3c111688-154b-47fa-8f89-6e48007b1fec/pull/0.log" Jan 24 00:59:49 crc kubenswrapper[4676]: I0124 00:59:49.682004 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j_3c111688-154b-47fa-8f89-6e48007b1fec/extract/0.log" Jan 24 00:59:49 crc kubenswrapper[4676]: I0124 00:59:49.768217 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv_79811b63-a3e6-47b0-8041-247b1536ba50/util/0.log" Jan 24 00:59:49 crc kubenswrapper[4676]: I0124 00:59:49.918897 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv_79811b63-a3e6-47b0-8041-247b1536ba50/pull/0.log" Jan 24 00:59:49 crc kubenswrapper[4676]: I0124 00:59:49.926013 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv_79811b63-a3e6-47b0-8041-247b1536ba50/pull/0.log" Jan 24 00:59:49 crc kubenswrapper[4676]: I0124 00:59:49.966519 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv_79811b63-a3e6-47b0-8041-247b1536ba50/util/0.log" Jan 24 00:59:50 crc kubenswrapper[4676]: I0124 00:59:50.084167 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv_79811b63-a3e6-47b0-8041-247b1536ba50/pull/0.log" Jan 24 00:59:50 crc kubenswrapper[4676]: I0124 00:59:50.128666 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv_79811b63-a3e6-47b0-8041-247b1536ba50/extract/0.log" Jan 
24 00:59:50 crc kubenswrapper[4676]: I0124 00:59:50.146860 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv_79811b63-a3e6-47b0-8041-247b1536ba50/util/0.log" Jan 24 00:59:50 crc kubenswrapper[4676]: I0124 00:59:50.263448 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vsfsj_34862c1d-2d18-42f8-9ef7-71d349c019fd/extract-utilities/0.log" Jan 24 00:59:50 crc kubenswrapper[4676]: I0124 00:59:50.490741 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vsfsj_34862c1d-2d18-42f8-9ef7-71d349c019fd/extract-utilities/0.log" Jan 24 00:59:50 crc kubenswrapper[4676]: I0124 00:59:50.506660 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vsfsj_34862c1d-2d18-42f8-9ef7-71d349c019fd/extract-content/0.log" Jan 24 00:59:50 crc kubenswrapper[4676]: I0124 00:59:50.527791 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vsfsj_34862c1d-2d18-42f8-9ef7-71d349c019fd/extract-content/0.log" Jan 24 00:59:50 crc kubenswrapper[4676]: I0124 00:59:50.733614 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vsfsj_34862c1d-2d18-42f8-9ef7-71d349c019fd/extract-utilities/0.log" Jan 24 00:59:50 crc kubenswrapper[4676]: I0124 00:59:50.767533 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vsfsj_34862c1d-2d18-42f8-9ef7-71d349c019fd/extract-content/0.log" Jan 24 00:59:50 crc kubenswrapper[4676]: I0124 00:59:50.982194 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vsfsj_34862c1d-2d18-42f8-9ef7-71d349c019fd/registry-server/0.log" Jan 24 00:59:51 crc kubenswrapper[4676]: I0124 00:59:51.020111 4676 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rjjnx_646456dc-35bc-4df2-8f92-55cdfefc6010/extract-utilities/0.log" Jan 24 00:59:51 crc kubenswrapper[4676]: I0124 00:59:51.191288 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rjjnx_646456dc-35bc-4df2-8f92-55cdfefc6010/extract-content/0.log" Jan 24 00:59:51 crc kubenswrapper[4676]: I0124 00:59:51.213562 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rjjnx_646456dc-35bc-4df2-8f92-55cdfefc6010/extract-content/0.log" Jan 24 00:59:51 crc kubenswrapper[4676]: I0124 00:59:51.216759 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rjjnx_646456dc-35bc-4df2-8f92-55cdfefc6010/extract-utilities/0.log" Jan 24 00:59:51 crc kubenswrapper[4676]: I0124 00:59:51.442715 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rjjnx_646456dc-35bc-4df2-8f92-55cdfefc6010/extract-utilities/0.log" Jan 24 00:59:51 crc kubenswrapper[4676]: I0124 00:59:51.475395 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rjjnx_646456dc-35bc-4df2-8f92-55cdfefc6010/extract-content/0.log" Jan 24 00:59:51 crc kubenswrapper[4676]: I0124 00:59:51.793981 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-kwptt_8db31db6-2c7c-4688-89c7-328024cd7003/marketplace-operator/0.log" Jan 24 00:59:51 crc kubenswrapper[4676]: I0124 00:59:51.933784 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw7tg_4c82da49-780b-431e-bfe7-d52ce3bcb623/extract-utilities/0.log" Jan 24 00:59:51 crc kubenswrapper[4676]: I0124 00:59:51.953183 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-rjjnx_646456dc-35bc-4df2-8f92-55cdfefc6010/registry-server/0.log" Jan 24 00:59:52 crc kubenswrapper[4676]: I0124 00:59:52.115529 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw7tg_4c82da49-780b-431e-bfe7-d52ce3bcb623/extract-utilities/0.log" Jan 24 00:59:52 crc kubenswrapper[4676]: I0124 00:59:52.136412 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw7tg_4c82da49-780b-431e-bfe7-d52ce3bcb623/extract-content/0.log" Jan 24 00:59:52 crc kubenswrapper[4676]: I0124 00:59:52.164862 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw7tg_4c82da49-780b-431e-bfe7-d52ce3bcb623/extract-content/0.log" Jan 24 00:59:52 crc kubenswrapper[4676]: I0124 00:59:52.399257 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw7tg_4c82da49-780b-431e-bfe7-d52ce3bcb623/extract-content/0.log" Jan 24 00:59:52 crc kubenswrapper[4676]: I0124 00:59:52.445845 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw7tg_4c82da49-780b-431e-bfe7-d52ce3bcb623/extract-utilities/0.log" Jan 24 00:59:52 crc kubenswrapper[4676]: I0124 00:59:52.504528 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw7tg_4c82da49-780b-431e-bfe7-d52ce3bcb623/registry-server/0.log" Jan 24 00:59:52 crc kubenswrapper[4676]: I0124 00:59:52.652348 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6mz4x_829774a4-dd19-462e-829c-f201bddf6886/extract-utilities/0.log" Jan 24 00:59:52 crc kubenswrapper[4676]: I0124 00:59:52.817784 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-6mz4x_829774a4-dd19-462e-829c-f201bddf6886/extract-content/0.log" Jan 24 00:59:52 crc kubenswrapper[4676]: I0124 00:59:52.854253 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6mz4x_829774a4-dd19-462e-829c-f201bddf6886/extract-utilities/0.log" Jan 24 00:59:52 crc kubenswrapper[4676]: I0124 00:59:52.871134 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6mz4x_829774a4-dd19-462e-829c-f201bddf6886/extract-content/0.log" Jan 24 00:59:53 crc kubenswrapper[4676]: I0124 00:59:53.052198 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6mz4x_829774a4-dd19-462e-829c-f201bddf6886/extract-utilities/0.log" Jan 24 00:59:53 crc kubenswrapper[4676]: I0124 00:59:53.064355 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6mz4x_829774a4-dd19-462e-829c-f201bddf6886/extract-content/0.log" Jan 24 00:59:53 crc kubenswrapper[4676]: I0124 00:59:53.449684 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6mz4x_829774a4-dd19-462e-829c-f201bddf6886/registry-server/0.log" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.166048 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2"] Jan 24 01:00:00 crc kubenswrapper[4676]: E0124 01:00:00.167701 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" containerName="extract-utilities" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.167728 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" containerName="extract-utilities" Jan 24 01:00:00 crc kubenswrapper[4676]: E0124 01:00:00.167745 4676 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" containerName="registry-server" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.167757 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" containerName="registry-server" Jan 24 01:00:00 crc kubenswrapper[4676]: E0124 01:00:00.167799 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" containerName="extract-content" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.167810 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" containerName="extract-content" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.168159 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a1e5fc-eb67-49a7-b2cd-2b90bee197cc" containerName="registry-server" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.169133 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.171510 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.171565 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.177256 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2"] Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.263937 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djwpk\" (UniqueName: \"kubernetes.io/projected/13892a91-56d1-49b9-9218-2c852080bfdb-kube-api-access-djwpk\") pod 
\"collect-profiles-29486940-s78l2\" (UID: \"13892a91-56d1-49b9-9218-2c852080bfdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.264642 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13892a91-56d1-49b9-9218-2c852080bfdb-config-volume\") pod \"collect-profiles-29486940-s78l2\" (UID: \"13892a91-56d1-49b9-9218-2c852080bfdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.264699 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13892a91-56d1-49b9-9218-2c852080bfdb-secret-volume\") pod \"collect-profiles-29486940-s78l2\" (UID: \"13892a91-56d1-49b9-9218-2c852080bfdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.367607 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djwpk\" (UniqueName: \"kubernetes.io/projected/13892a91-56d1-49b9-9218-2c852080bfdb-kube-api-access-djwpk\") pod \"collect-profiles-29486940-s78l2\" (UID: \"13892a91-56d1-49b9-9218-2c852080bfdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.367677 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13892a91-56d1-49b9-9218-2c852080bfdb-config-volume\") pod \"collect-profiles-29486940-s78l2\" (UID: \"13892a91-56d1-49b9-9218-2c852080bfdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.367794 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13892a91-56d1-49b9-9218-2c852080bfdb-secret-volume\") pod \"collect-profiles-29486940-s78l2\" (UID: \"13892a91-56d1-49b9-9218-2c852080bfdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.370602 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13892a91-56d1-49b9-9218-2c852080bfdb-config-volume\") pod \"collect-profiles-29486940-s78l2\" (UID: \"13892a91-56d1-49b9-9218-2c852080bfdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.384157 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13892a91-56d1-49b9-9218-2c852080bfdb-secret-volume\") pod \"collect-profiles-29486940-s78l2\" (UID: \"13892a91-56d1-49b9-9218-2c852080bfdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.387673 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djwpk\" (UniqueName: \"kubernetes.io/projected/13892a91-56d1-49b9-9218-2c852080bfdb-kube-api-access-djwpk\") pod \"collect-profiles-29486940-s78l2\" (UID: \"13892a91-56d1-49b9-9218-2c852080bfdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.496326 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" Jan 24 01:00:00 crc kubenswrapper[4676]: I0124 01:00:00.937064 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2"] Jan 24 01:00:00 crc kubenswrapper[4676]: W0124 01:00:00.947634 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13892a91_56d1_49b9_9218_2c852080bfdb.slice/crio-d92f79a693b27138ac891230baed68cfe599a074ee62367fdafee3f9f2dae67c WatchSource:0}: Error finding container d92f79a693b27138ac891230baed68cfe599a074ee62367fdafee3f9f2dae67c: Status 404 returned error can't find the container with id d92f79a693b27138ac891230baed68cfe599a074ee62367fdafee3f9f2dae67c Jan 24 01:00:01 crc kubenswrapper[4676]: I0124 01:00:01.340687 4676 generic.go:334] "Generic (PLEG): container finished" podID="13892a91-56d1-49b9-9218-2c852080bfdb" containerID="a1167e023def9f55810d9e752100b280bff41604a70b8d00cf03ee7930fce39d" exitCode=0 Jan 24 01:00:01 crc kubenswrapper[4676]: I0124 01:00:01.340758 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" event={"ID":"13892a91-56d1-49b9-9218-2c852080bfdb","Type":"ContainerDied","Data":"a1167e023def9f55810d9e752100b280bff41604a70b8d00cf03ee7930fce39d"} Jan 24 01:00:01 crc kubenswrapper[4676]: I0124 01:00:01.340977 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" event={"ID":"13892a91-56d1-49b9-9218-2c852080bfdb","Type":"ContainerStarted","Data":"d92f79a693b27138ac891230baed68cfe599a074ee62367fdafee3f9f2dae67c"} Jan 24 01:00:02 crc kubenswrapper[4676]: I0124 01:00:02.718221 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" Jan 24 01:00:02 crc kubenswrapper[4676]: I0124 01:00:02.810670 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13892a91-56d1-49b9-9218-2c852080bfdb-secret-volume\") pod \"13892a91-56d1-49b9-9218-2c852080bfdb\" (UID: \"13892a91-56d1-49b9-9218-2c852080bfdb\") " Jan 24 01:00:02 crc kubenswrapper[4676]: I0124 01:00:02.810726 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13892a91-56d1-49b9-9218-2c852080bfdb-config-volume\") pod \"13892a91-56d1-49b9-9218-2c852080bfdb\" (UID: \"13892a91-56d1-49b9-9218-2c852080bfdb\") " Jan 24 01:00:02 crc kubenswrapper[4676]: I0124 01:00:02.810894 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djwpk\" (UniqueName: \"kubernetes.io/projected/13892a91-56d1-49b9-9218-2c852080bfdb-kube-api-access-djwpk\") pod \"13892a91-56d1-49b9-9218-2c852080bfdb\" (UID: \"13892a91-56d1-49b9-9218-2c852080bfdb\") " Jan 24 01:00:02 crc kubenswrapper[4676]: I0124 01:00:02.811961 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13892a91-56d1-49b9-9218-2c852080bfdb-config-volume" (OuterVolumeSpecName: "config-volume") pod "13892a91-56d1-49b9-9218-2c852080bfdb" (UID: "13892a91-56d1-49b9-9218-2c852080bfdb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 01:00:02 crc kubenswrapper[4676]: I0124 01:00:02.817986 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13892a91-56d1-49b9-9218-2c852080bfdb-kube-api-access-djwpk" (OuterVolumeSpecName: "kube-api-access-djwpk") pod "13892a91-56d1-49b9-9218-2c852080bfdb" (UID: "13892a91-56d1-49b9-9218-2c852080bfdb"). 
InnerVolumeSpecName "kube-api-access-djwpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 01:00:02 crc kubenswrapper[4676]: I0124 01:00:02.818493 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13892a91-56d1-49b9-9218-2c852080bfdb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "13892a91-56d1-49b9-9218-2c852080bfdb" (UID: "13892a91-56d1-49b9-9218-2c852080bfdb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 01:00:02 crc kubenswrapper[4676]: I0124 01:00:02.913237 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djwpk\" (UniqueName: \"kubernetes.io/projected/13892a91-56d1-49b9-9218-2c852080bfdb-kube-api-access-djwpk\") on node \"crc\" DevicePath \"\"" Jan 24 01:00:02 crc kubenswrapper[4676]: I0124 01:00:02.913275 4676 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13892a91-56d1-49b9-9218-2c852080bfdb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 01:00:02 crc kubenswrapper[4676]: I0124 01:00:02.913287 4676 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13892a91-56d1-49b9-9218-2c852080bfdb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 01:00:03 crc kubenswrapper[4676]: I0124 01:00:03.360511 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" event={"ID":"13892a91-56d1-49b9-9218-2c852080bfdb","Type":"ContainerDied","Data":"d92f79a693b27138ac891230baed68cfe599a074ee62367fdafee3f9f2dae67c"} Jan 24 01:00:03 crc kubenswrapper[4676]: I0124 01:00:03.360550 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d92f79a693b27138ac891230baed68cfe599a074ee62367fdafee3f9f2dae67c" Jan 24 01:00:03 crc kubenswrapper[4676]: I0124 01:00:03.360598 4676 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486940-s78l2" Jan 24 01:00:03 crc kubenswrapper[4676]: I0124 01:00:03.813334 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p"] Jan 24 01:00:03 crc kubenswrapper[4676]: I0124 01:00:03.825255 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486895-5v67p"] Jan 24 01:00:04 crc kubenswrapper[4676]: I0124 01:00:04.269216 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40ef8e58-50e2-4cc5-b92c-35710605e5b1" path="/var/lib/kubelet/pods/40ef8e58-50e2-4cc5-b92c-35710605e5b1/volumes" Jan 24 01:00:25 crc kubenswrapper[4676]: E0124 01:00:25.369851 4676 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.27:59720->38.102.83.27:34811: write tcp 38.102.83.27:59720->38.102.83.27:34811: write: broken pipe Jan 24 01:00:39 crc kubenswrapper[4676]: I0124 01:00:39.364726 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 01:00:39 crc kubenswrapper[4676]: I0124 01:00:39.365210 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 01:00:43 crc kubenswrapper[4676]: I0124 01:00:43.272788 4676 scope.go:117] "RemoveContainer" containerID="1f9529fca33e87b8d8a2cf842fe194d301e46a27673eb8a957977d0bcb8f4f8d" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 
01:01:00.157550 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29486941-6wtvm"] Jan 24 01:01:00 crc kubenswrapper[4676]: E0124 01:01:00.158777 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13892a91-56d1-49b9-9218-2c852080bfdb" containerName="collect-profiles" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.158801 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="13892a91-56d1-49b9-9218-2c852080bfdb" containerName="collect-profiles" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.159189 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="13892a91-56d1-49b9-9218-2c852080bfdb" containerName="collect-profiles" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.160266 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.174344 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486941-6wtvm"] Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.247570 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdrql\" (UniqueName: \"kubernetes.io/projected/483961de-208d-4593-a6c8-ecee687b7c06-kube-api-access-mdrql\") pod \"keystone-cron-29486941-6wtvm\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.247752 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-config-data\") pod \"keystone-cron-29486941-6wtvm\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.247779 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-fernet-keys\") pod \"keystone-cron-29486941-6wtvm\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.247813 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-combined-ca-bundle\") pod \"keystone-cron-29486941-6wtvm\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.350428 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-config-data\") pod \"keystone-cron-29486941-6wtvm\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.350483 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-fernet-keys\") pod \"keystone-cron-29486941-6wtvm\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.350519 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-combined-ca-bundle\") pod \"keystone-cron-29486941-6wtvm\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.350619 4676 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-mdrql\" (UniqueName: \"kubernetes.io/projected/483961de-208d-4593-a6c8-ecee687b7c06-kube-api-access-mdrql\") pod \"keystone-cron-29486941-6wtvm\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.358792 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-combined-ca-bundle\") pod \"keystone-cron-29486941-6wtvm\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.362684 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-fernet-keys\") pod \"keystone-cron-29486941-6wtvm\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.370201 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-config-data\") pod \"keystone-cron-29486941-6wtvm\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.392797 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdrql\" (UniqueName: \"kubernetes.io/projected/483961de-208d-4593-a6c8-ecee687b7c06-kube-api-access-mdrql\") pod \"keystone-cron-29486941-6wtvm\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.497082 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:00 crc kubenswrapper[4676]: I0124 01:01:00.960583 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486941-6wtvm"] Jan 24 01:01:01 crc kubenswrapper[4676]: I0124 01:01:01.964953 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486941-6wtvm" event={"ID":"483961de-208d-4593-a6c8-ecee687b7c06","Type":"ContainerStarted","Data":"f64c0d4c1559404ae786b4129234e03dc73707e5aea1115b52503a990f8b5b98"} Jan 24 01:01:01 crc kubenswrapper[4676]: I0124 01:01:01.965303 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486941-6wtvm" event={"ID":"483961de-208d-4593-a6c8-ecee687b7c06","Type":"ContainerStarted","Data":"2a967c567bd8fd03d2dede0c91a2bbb28ebdd22ba30ad8ef3f097c16c69a4a6e"} Jan 24 01:01:03 crc kubenswrapper[4676]: I0124 01:01:03.983151 4676 generic.go:334] "Generic (PLEG): container finished" podID="483961de-208d-4593-a6c8-ecee687b7c06" containerID="f64c0d4c1559404ae786b4129234e03dc73707e5aea1115b52503a990f8b5b98" exitCode=0 Jan 24 01:01:03 crc kubenswrapper[4676]: I0124 01:01:03.983259 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486941-6wtvm" event={"ID":"483961de-208d-4593-a6c8-ecee687b7c06","Type":"ContainerDied","Data":"f64c0d4c1559404ae786b4129234e03dc73707e5aea1115b52503a990f8b5b98"} Jan 24 01:01:05 crc kubenswrapper[4676]: I0124 01:01:05.342026 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:05 crc kubenswrapper[4676]: I0124 01:01:05.453629 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdrql\" (UniqueName: \"kubernetes.io/projected/483961de-208d-4593-a6c8-ecee687b7c06-kube-api-access-mdrql\") pod \"483961de-208d-4593-a6c8-ecee687b7c06\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " Jan 24 01:01:05 crc kubenswrapper[4676]: I0124 01:01:05.453994 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-fernet-keys\") pod \"483961de-208d-4593-a6c8-ecee687b7c06\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " Jan 24 01:01:05 crc kubenswrapper[4676]: I0124 01:01:05.454057 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-combined-ca-bundle\") pod \"483961de-208d-4593-a6c8-ecee687b7c06\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " Jan 24 01:01:05 crc kubenswrapper[4676]: I0124 01:01:05.454203 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-config-data\") pod \"483961de-208d-4593-a6c8-ecee687b7c06\" (UID: \"483961de-208d-4593-a6c8-ecee687b7c06\") " Jan 24 01:01:05 crc kubenswrapper[4676]: I0124 01:01:05.462838 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "483961de-208d-4593-a6c8-ecee687b7c06" (UID: "483961de-208d-4593-a6c8-ecee687b7c06"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 01:01:05 crc kubenswrapper[4676]: I0124 01:01:05.464581 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483961de-208d-4593-a6c8-ecee687b7c06-kube-api-access-mdrql" (OuterVolumeSpecName: "kube-api-access-mdrql") pod "483961de-208d-4593-a6c8-ecee687b7c06" (UID: "483961de-208d-4593-a6c8-ecee687b7c06"). InnerVolumeSpecName "kube-api-access-mdrql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 01:01:05 crc kubenswrapper[4676]: I0124 01:01:05.490635 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "483961de-208d-4593-a6c8-ecee687b7c06" (UID: "483961de-208d-4593-a6c8-ecee687b7c06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 01:01:05 crc kubenswrapper[4676]: I0124 01:01:05.523683 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-config-data" (OuterVolumeSpecName: "config-data") pod "483961de-208d-4593-a6c8-ecee687b7c06" (UID: "483961de-208d-4593-a6c8-ecee687b7c06"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 01:01:05 crc kubenswrapper[4676]: I0124 01:01:05.557537 4676 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 01:01:05 crc kubenswrapper[4676]: I0124 01:01:05.557576 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdrql\" (UniqueName: \"kubernetes.io/projected/483961de-208d-4593-a6c8-ecee687b7c06-kube-api-access-mdrql\") on node \"crc\" DevicePath \"\"" Jan 24 01:01:05 crc kubenswrapper[4676]: I0124 01:01:05.557587 4676 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 24 01:01:05 crc kubenswrapper[4676]: I0124 01:01:05.557596 4676 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483961de-208d-4593-a6c8-ecee687b7c06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 01:01:06 crc kubenswrapper[4676]: I0124 01:01:06.001896 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486941-6wtvm" event={"ID":"483961de-208d-4593-a6c8-ecee687b7c06","Type":"ContainerDied","Data":"2a967c567bd8fd03d2dede0c91a2bbb28ebdd22ba30ad8ef3f097c16c69a4a6e"} Jan 24 01:01:06 crc kubenswrapper[4676]: I0124 01:01:06.001933 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a967c567bd8fd03d2dede0c91a2bbb28ebdd22ba30ad8ef3f097c16c69a4a6e" Jan 24 01:01:06 crc kubenswrapper[4676]: I0124 01:01:06.001982 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486941-6wtvm" Jan 24 01:01:09 crc kubenswrapper[4676]: I0124 01:01:09.364291 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 01:01:09 crc kubenswrapper[4676]: I0124 01:01:09.365024 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 01:01:39 crc kubenswrapper[4676]: I0124 01:01:39.363879 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 01:01:39 crc kubenswrapper[4676]: I0124 01:01:39.364527 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 01:01:39 crc kubenswrapper[4676]: I0124 01:01:39.364590 4676 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 01:01:39 crc kubenswrapper[4676]: I0124 01:01:39.365588 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"e910204e57fdf9a2ba78d40b8e9fd8506cf9f7184737c120d4af2680b61668e0"} pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 01:01:39 crc kubenswrapper[4676]: I0124 01:01:39.365684 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" containerID="cri-o://e910204e57fdf9a2ba78d40b8e9fd8506cf9f7184737c120d4af2680b61668e0" gracePeriod=600 Jan 24 01:01:40 crc kubenswrapper[4676]: I0124 01:01:40.440901 4676 generic.go:334] "Generic (PLEG): container finished" podID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerID="e910204e57fdf9a2ba78d40b8e9fd8506cf9f7184737c120d4af2680b61668e0" exitCode=0 Jan 24 01:01:40 crc kubenswrapper[4676]: I0124 01:01:40.440963 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerDied","Data":"e910204e57fdf9a2ba78d40b8e9fd8506cf9f7184737c120d4af2680b61668e0"} Jan 24 01:01:40 crc kubenswrapper[4676]: I0124 01:01:40.441405 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9"} Jan 24 01:01:40 crc kubenswrapper[4676]: I0124 01:01:40.441442 4676 scope.go:117] "RemoveContainer" containerID="3028d9f3374dc67ac159b532c9f99a34d8cb2b7f6bd88b8c4da01f1590e10398" Jan 24 01:01:41 crc kubenswrapper[4676]: I0124 01:01:41.458637 4676 generic.go:334] "Generic (PLEG): container finished" podID="a646626a-4429-4d8a-9f75-7b8bfade90ec" 
containerID="e762fbd56fb888c091ce2f8ff2b6ad8a52059af7b071066cce8729c1ec864419" exitCode=0 Jan 24 01:01:41 crc kubenswrapper[4676]: I0124 01:01:41.458724 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57xmz/must-gather-hv4gr" event={"ID":"a646626a-4429-4d8a-9f75-7b8bfade90ec","Type":"ContainerDied","Data":"e762fbd56fb888c091ce2f8ff2b6ad8a52059af7b071066cce8729c1ec864419"} Jan 24 01:01:41 crc kubenswrapper[4676]: I0124 01:01:41.460304 4676 scope.go:117] "RemoveContainer" containerID="e762fbd56fb888c091ce2f8ff2b6ad8a52059af7b071066cce8729c1ec864419" Jan 24 01:01:41 crc kubenswrapper[4676]: I0124 01:01:41.872143 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-57xmz_must-gather-hv4gr_a646626a-4429-4d8a-9f75-7b8bfade90ec/gather/0.log" Jan 24 01:01:50 crc kubenswrapper[4676]: I0124 01:01:50.296804 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-57xmz/must-gather-hv4gr"] Jan 24 01:01:50 crc kubenswrapper[4676]: I0124 01:01:50.297552 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-57xmz/must-gather-hv4gr" podUID="a646626a-4429-4d8a-9f75-7b8bfade90ec" containerName="copy" containerID="cri-o://5282e9521531b7fbcb12f5cd407d932cd2321b1dd6ad9e7d8b88355c7f251d0b" gracePeriod=2 Jan 24 01:01:50 crc kubenswrapper[4676]: I0124 01:01:50.336190 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-57xmz/must-gather-hv4gr"] Jan 24 01:01:50 crc kubenswrapper[4676]: I0124 01:01:50.563931 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-57xmz_must-gather-hv4gr_a646626a-4429-4d8a-9f75-7b8bfade90ec/copy/0.log" Jan 24 01:01:50 crc kubenswrapper[4676]: I0124 01:01:50.564570 4676 generic.go:334] "Generic (PLEG): container finished" podID="a646626a-4429-4d8a-9f75-7b8bfade90ec" containerID="5282e9521531b7fbcb12f5cd407d932cd2321b1dd6ad9e7d8b88355c7f251d0b" exitCode=143 Jan 24 
01:01:50 crc kubenswrapper[4676]: I0124 01:01:50.825186 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-57xmz_must-gather-hv4gr_a646626a-4429-4d8a-9f75-7b8bfade90ec/copy/0.log" Jan 24 01:01:50 crc kubenswrapper[4676]: I0124 01:01:50.825819 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57xmz/must-gather-hv4gr" Jan 24 01:01:50 crc kubenswrapper[4676]: I0124 01:01:50.907238 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbztc\" (UniqueName: \"kubernetes.io/projected/a646626a-4429-4d8a-9f75-7b8bfade90ec-kube-api-access-jbztc\") pod \"a646626a-4429-4d8a-9f75-7b8bfade90ec\" (UID: \"a646626a-4429-4d8a-9f75-7b8bfade90ec\") " Jan 24 01:01:50 crc kubenswrapper[4676]: I0124 01:01:50.907437 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a646626a-4429-4d8a-9f75-7b8bfade90ec-must-gather-output\") pod \"a646626a-4429-4d8a-9f75-7b8bfade90ec\" (UID: \"a646626a-4429-4d8a-9f75-7b8bfade90ec\") " Jan 24 01:01:50 crc kubenswrapper[4676]: I0124 01:01:50.913077 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a646626a-4429-4d8a-9f75-7b8bfade90ec-kube-api-access-jbztc" (OuterVolumeSpecName: "kube-api-access-jbztc") pod "a646626a-4429-4d8a-9f75-7b8bfade90ec" (UID: "a646626a-4429-4d8a-9f75-7b8bfade90ec"). InnerVolumeSpecName "kube-api-access-jbztc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 01:01:51 crc kubenswrapper[4676]: I0124 01:01:51.009031 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbztc\" (UniqueName: \"kubernetes.io/projected/a646626a-4429-4d8a-9f75-7b8bfade90ec-kube-api-access-jbztc\") on node \"crc\" DevicePath \"\"" Jan 24 01:01:51 crc kubenswrapper[4676]: I0124 01:01:51.056722 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a646626a-4429-4d8a-9f75-7b8bfade90ec-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a646626a-4429-4d8a-9f75-7b8bfade90ec" (UID: "a646626a-4429-4d8a-9f75-7b8bfade90ec"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 01:01:51 crc kubenswrapper[4676]: I0124 01:01:51.111086 4676 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a646626a-4429-4d8a-9f75-7b8bfade90ec-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 24 01:01:51 crc kubenswrapper[4676]: I0124 01:01:51.578149 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-57xmz_must-gather-hv4gr_a646626a-4429-4d8a-9f75-7b8bfade90ec/copy/0.log" Jan 24 01:01:51 crc kubenswrapper[4676]: I0124 01:01:51.579059 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57xmz/must-gather-hv4gr" Jan 24 01:01:51 crc kubenswrapper[4676]: I0124 01:01:51.579052 4676 scope.go:117] "RemoveContainer" containerID="5282e9521531b7fbcb12f5cd407d932cd2321b1dd6ad9e7d8b88355c7f251d0b" Jan 24 01:01:51 crc kubenswrapper[4676]: I0124 01:01:51.608269 4676 scope.go:117] "RemoveContainer" containerID="e762fbd56fb888c091ce2f8ff2b6ad8a52059af7b071066cce8729c1ec864419" Jan 24 01:01:52 crc kubenswrapper[4676]: I0124 01:01:52.274054 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a646626a-4429-4d8a-9f75-7b8bfade90ec" path="/var/lib/kubelet/pods/a646626a-4429-4d8a-9f75-7b8bfade90ec/volumes" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.345512 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dg4k2"] Jan 24 01:02:33 crc kubenswrapper[4676]: E0124 01:02:33.346410 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483961de-208d-4593-a6c8-ecee687b7c06" containerName="keystone-cron" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.346423 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="483961de-208d-4593-a6c8-ecee687b7c06" containerName="keystone-cron" Jan 24 01:02:33 crc kubenswrapper[4676]: E0124 01:02:33.346437 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a646626a-4429-4d8a-9f75-7b8bfade90ec" containerName="copy" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.346442 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="a646626a-4429-4d8a-9f75-7b8bfade90ec" containerName="copy" Jan 24 01:02:33 crc kubenswrapper[4676]: E0124 01:02:33.346450 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a646626a-4429-4d8a-9f75-7b8bfade90ec" containerName="gather" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.346456 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="a646626a-4429-4d8a-9f75-7b8bfade90ec" 
containerName="gather" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.346626 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="a646626a-4429-4d8a-9f75-7b8bfade90ec" containerName="gather" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.346641 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="483961de-208d-4593-a6c8-ecee687b7c06" containerName="keystone-cron" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.346657 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="a646626a-4429-4d8a-9f75-7b8bfade90ec" containerName="copy" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.348692 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.354821 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg4k2"] Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.460539 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-catalog-content\") pod \"redhat-marketplace-dg4k2\" (UID: \"04f6e0b9-60c2-4233-9ae1-466f0d9856f7\") " pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.460600 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fptfh\" (UniqueName: \"kubernetes.io/projected/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-kube-api-access-fptfh\") pod \"redhat-marketplace-dg4k2\" (UID: \"04f6e0b9-60c2-4233-9ae1-466f0d9856f7\") " pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.460640 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-utilities\") pod \"redhat-marketplace-dg4k2\" (UID: \"04f6e0b9-60c2-4233-9ae1-466f0d9856f7\") " pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.562522 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-catalog-content\") pod \"redhat-marketplace-dg4k2\" (UID: \"04f6e0b9-60c2-4233-9ae1-466f0d9856f7\") " pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.562575 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fptfh\" (UniqueName: \"kubernetes.io/projected/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-kube-api-access-fptfh\") pod \"redhat-marketplace-dg4k2\" (UID: \"04f6e0b9-60c2-4233-9ae1-466f0d9856f7\") " pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.562616 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-utilities\") pod \"redhat-marketplace-dg4k2\" (UID: \"04f6e0b9-60c2-4233-9ae1-466f0d9856f7\") " pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.563027 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-utilities\") pod \"redhat-marketplace-dg4k2\" (UID: \"04f6e0b9-60c2-4233-9ae1-466f0d9856f7\") " pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.563214 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-catalog-content\") pod \"redhat-marketplace-dg4k2\" (UID: \"04f6e0b9-60c2-4233-9ae1-466f0d9856f7\") " pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.591113 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fptfh\" (UniqueName: \"kubernetes.io/projected/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-kube-api-access-fptfh\") pod \"redhat-marketplace-dg4k2\" (UID: \"04f6e0b9-60c2-4233-9ae1-466f0d9856f7\") " pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:33 crc kubenswrapper[4676]: I0124 01:02:33.666474 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:34 crc kubenswrapper[4676]: I0124 01:02:34.146361 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg4k2"] Jan 24 01:02:35 crc kubenswrapper[4676]: I0124 01:02:35.022362 4676 generic.go:334] "Generic (PLEG): container finished" podID="04f6e0b9-60c2-4233-9ae1-466f0d9856f7" containerID="1d59967b92bf2bb19b28ac9354c7eaf360e79496f4387e20edf7d4d9289aee36" exitCode=0 Jan 24 01:02:35 crc kubenswrapper[4676]: I0124 01:02:35.022529 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg4k2" event={"ID":"04f6e0b9-60c2-4233-9ae1-466f0d9856f7","Type":"ContainerDied","Data":"1d59967b92bf2bb19b28ac9354c7eaf360e79496f4387e20edf7d4d9289aee36"} Jan 24 01:02:35 crc kubenswrapper[4676]: I0124 01:02:35.023024 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg4k2" event={"ID":"04f6e0b9-60c2-4233-9ae1-466f0d9856f7","Type":"ContainerStarted","Data":"26d6623bd7a9f3a00f3aeb656c92a96d75069eab3735f0a291f1ae75f99a99cc"} Jan 24 01:02:36 crc kubenswrapper[4676]: I0124 01:02:36.036397 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-dg4k2" event={"ID":"04f6e0b9-60c2-4233-9ae1-466f0d9856f7","Type":"ContainerStarted","Data":"8d31fcde81cecd6db5fb594b425698747416b475844d88a2e2664f5bd76c2155"} Jan 24 01:02:37 crc kubenswrapper[4676]: I0124 01:02:37.049682 4676 generic.go:334] "Generic (PLEG): container finished" podID="04f6e0b9-60c2-4233-9ae1-466f0d9856f7" containerID="8d31fcde81cecd6db5fb594b425698747416b475844d88a2e2664f5bd76c2155" exitCode=0 Jan 24 01:02:37 crc kubenswrapper[4676]: I0124 01:02:37.049783 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg4k2" event={"ID":"04f6e0b9-60c2-4233-9ae1-466f0d9856f7","Type":"ContainerDied","Data":"8d31fcde81cecd6db5fb594b425698747416b475844d88a2e2664f5bd76c2155"} Jan 24 01:02:38 crc kubenswrapper[4676]: I0124 01:02:38.073164 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg4k2" event={"ID":"04f6e0b9-60c2-4233-9ae1-466f0d9856f7","Type":"ContainerStarted","Data":"c92d3b1e742077bc21525575f682ff78f7de491d3f8231de5ae308c3c85f8e93"} Jan 24 01:02:38 crc kubenswrapper[4676]: I0124 01:02:38.103531 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dg4k2" podStartSLOduration=2.604491114 podStartE2EDuration="5.103515592s" podCreationTimestamp="2026-01-24 01:02:33 +0000 UTC" firstStartedPulling="2026-01-24 01:02:35.024477618 +0000 UTC m=+3539.054448659" lastFinishedPulling="2026-01-24 01:02:37.523502126 +0000 UTC m=+3541.553473137" observedRunningTime="2026-01-24 01:02:38.099095085 +0000 UTC m=+3542.129066076" watchObservedRunningTime="2026-01-24 01:02:38.103515592 +0000 UTC m=+3542.133486593" Jan 24 01:02:43 crc kubenswrapper[4676]: I0124 01:02:43.412159 4676 scope.go:117] "RemoveContainer" containerID="fb528b96bb0ea9229c8a0baf4db69e4aad9f43be8642c6a3de48a10855ae6586" Jan 24 01:02:43 crc kubenswrapper[4676]: I0124 01:02:43.667704 4676 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:43 crc kubenswrapper[4676]: I0124 01:02:43.667760 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:43 crc kubenswrapper[4676]: I0124 01:02:43.720984 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:44 crc kubenswrapper[4676]: I0124 01:02:44.179553 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:45 crc kubenswrapper[4676]: I0124 01:02:45.135578 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg4k2"] Jan 24 01:02:46 crc kubenswrapper[4676]: I0124 01:02:46.153228 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dg4k2" podUID="04f6e0b9-60c2-4233-9ae1-466f0d9856f7" containerName="registry-server" containerID="cri-o://c92d3b1e742077bc21525575f682ff78f7de491d3f8231de5ae308c3c85f8e93" gracePeriod=2 Jan 24 01:02:46 crc kubenswrapper[4676]: I0124 01:02:46.616624 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:46 crc kubenswrapper[4676]: I0124 01:02:46.647962 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fptfh\" (UniqueName: \"kubernetes.io/projected/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-kube-api-access-fptfh\") pod \"04f6e0b9-60c2-4233-9ae1-466f0d9856f7\" (UID: \"04f6e0b9-60c2-4233-9ae1-466f0d9856f7\") " Jan 24 01:02:46 crc kubenswrapper[4676]: I0124 01:02:46.648176 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-catalog-content\") pod \"04f6e0b9-60c2-4233-9ae1-466f0d9856f7\" (UID: \"04f6e0b9-60c2-4233-9ae1-466f0d9856f7\") " Jan 24 01:02:46 crc kubenswrapper[4676]: I0124 01:02:46.648298 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-utilities\") pod \"04f6e0b9-60c2-4233-9ae1-466f0d9856f7\" (UID: \"04f6e0b9-60c2-4233-9ae1-466f0d9856f7\") " Jan 24 01:02:46 crc kubenswrapper[4676]: I0124 01:02:46.651701 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-utilities" (OuterVolumeSpecName: "utilities") pod "04f6e0b9-60c2-4233-9ae1-466f0d9856f7" (UID: "04f6e0b9-60c2-4233-9ae1-466f0d9856f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 01:02:46 crc kubenswrapper[4676]: I0124 01:02:46.677570 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-kube-api-access-fptfh" (OuterVolumeSpecName: "kube-api-access-fptfh") pod "04f6e0b9-60c2-4233-9ae1-466f0d9856f7" (UID: "04f6e0b9-60c2-4233-9ae1-466f0d9856f7"). InnerVolumeSpecName "kube-api-access-fptfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 01:02:46 crc kubenswrapper[4676]: I0124 01:02:46.750420 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 01:02:46 crc kubenswrapper[4676]: I0124 01:02:46.750451 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fptfh\" (UniqueName: \"kubernetes.io/projected/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-kube-api-access-fptfh\") on node \"crc\" DevicePath \"\"" Jan 24 01:02:46 crc kubenswrapper[4676]: I0124 01:02:46.806221 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04f6e0b9-60c2-4233-9ae1-466f0d9856f7" (UID: "04f6e0b9-60c2-4233-9ae1-466f0d9856f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 01:02:46 crc kubenswrapper[4676]: I0124 01:02:46.853573 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f6e0b9-60c2-4233-9ae1-466f0d9856f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 01:02:47 crc kubenswrapper[4676]: I0124 01:02:47.163990 4676 generic.go:334] "Generic (PLEG): container finished" podID="04f6e0b9-60c2-4233-9ae1-466f0d9856f7" containerID="c92d3b1e742077bc21525575f682ff78f7de491d3f8231de5ae308c3c85f8e93" exitCode=0 Jan 24 01:02:47 crc kubenswrapper[4676]: I0124 01:02:47.164068 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg4k2" Jan 24 01:02:47 crc kubenswrapper[4676]: I0124 01:02:47.164092 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg4k2" event={"ID":"04f6e0b9-60c2-4233-9ae1-466f0d9856f7","Type":"ContainerDied","Data":"c92d3b1e742077bc21525575f682ff78f7de491d3f8231de5ae308c3c85f8e93"} Jan 24 01:02:47 crc kubenswrapper[4676]: I0124 01:02:47.165488 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg4k2" event={"ID":"04f6e0b9-60c2-4233-9ae1-466f0d9856f7","Type":"ContainerDied","Data":"26d6623bd7a9f3a00f3aeb656c92a96d75069eab3735f0a291f1ae75f99a99cc"} Jan 24 01:02:47 crc kubenswrapper[4676]: I0124 01:02:47.165594 4676 scope.go:117] "RemoveContainer" containerID="c92d3b1e742077bc21525575f682ff78f7de491d3f8231de5ae308c3c85f8e93" Jan 24 01:02:47 crc kubenswrapper[4676]: I0124 01:02:47.210223 4676 scope.go:117] "RemoveContainer" containerID="8d31fcde81cecd6db5fb594b425698747416b475844d88a2e2664f5bd76c2155" Jan 24 01:02:47 crc kubenswrapper[4676]: I0124 01:02:47.213259 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg4k2"] Jan 24 01:02:47 crc kubenswrapper[4676]: I0124 01:02:47.226831 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg4k2"] Jan 24 01:02:47 crc kubenswrapper[4676]: I0124 01:02:47.236156 4676 scope.go:117] "RemoveContainer" containerID="1d59967b92bf2bb19b28ac9354c7eaf360e79496f4387e20edf7d4d9289aee36" Jan 24 01:02:47 crc kubenswrapper[4676]: I0124 01:02:47.278890 4676 scope.go:117] "RemoveContainer" containerID="c92d3b1e742077bc21525575f682ff78f7de491d3f8231de5ae308c3c85f8e93" Jan 24 01:02:47 crc kubenswrapper[4676]: E0124 01:02:47.279342 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c92d3b1e742077bc21525575f682ff78f7de491d3f8231de5ae308c3c85f8e93\": container with ID starting with c92d3b1e742077bc21525575f682ff78f7de491d3f8231de5ae308c3c85f8e93 not found: ID does not exist" containerID="c92d3b1e742077bc21525575f682ff78f7de491d3f8231de5ae308c3c85f8e93" Jan 24 01:02:47 crc kubenswrapper[4676]: I0124 01:02:47.279504 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92d3b1e742077bc21525575f682ff78f7de491d3f8231de5ae308c3c85f8e93"} err="failed to get container status \"c92d3b1e742077bc21525575f682ff78f7de491d3f8231de5ae308c3c85f8e93\": rpc error: code = NotFound desc = could not find container \"c92d3b1e742077bc21525575f682ff78f7de491d3f8231de5ae308c3c85f8e93\": container with ID starting with c92d3b1e742077bc21525575f682ff78f7de491d3f8231de5ae308c3c85f8e93 not found: ID does not exist" Jan 24 01:02:47 crc kubenswrapper[4676]: I0124 01:02:47.279549 4676 scope.go:117] "RemoveContainer" containerID="8d31fcde81cecd6db5fb594b425698747416b475844d88a2e2664f5bd76c2155" Jan 24 01:02:47 crc kubenswrapper[4676]: E0124 01:02:47.280845 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d31fcde81cecd6db5fb594b425698747416b475844d88a2e2664f5bd76c2155\": container with ID starting with 8d31fcde81cecd6db5fb594b425698747416b475844d88a2e2664f5bd76c2155 not found: ID does not exist" containerID="8d31fcde81cecd6db5fb594b425698747416b475844d88a2e2664f5bd76c2155" Jan 24 01:02:47 crc kubenswrapper[4676]: I0124 01:02:47.281312 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d31fcde81cecd6db5fb594b425698747416b475844d88a2e2664f5bd76c2155"} err="failed to get container status \"8d31fcde81cecd6db5fb594b425698747416b475844d88a2e2664f5bd76c2155\": rpc error: code = NotFound desc = could not find container \"8d31fcde81cecd6db5fb594b425698747416b475844d88a2e2664f5bd76c2155\": container with ID 
starting with 8d31fcde81cecd6db5fb594b425698747416b475844d88a2e2664f5bd76c2155 not found: ID does not exist" Jan 24 01:02:47 crc kubenswrapper[4676]: I0124 01:02:47.281715 4676 scope.go:117] "RemoveContainer" containerID="1d59967b92bf2bb19b28ac9354c7eaf360e79496f4387e20edf7d4d9289aee36" Jan 24 01:02:47 crc kubenswrapper[4676]: E0124 01:02:47.282728 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d59967b92bf2bb19b28ac9354c7eaf360e79496f4387e20edf7d4d9289aee36\": container with ID starting with 1d59967b92bf2bb19b28ac9354c7eaf360e79496f4387e20edf7d4d9289aee36 not found: ID does not exist" containerID="1d59967b92bf2bb19b28ac9354c7eaf360e79496f4387e20edf7d4d9289aee36" Jan 24 01:02:47 crc kubenswrapper[4676]: I0124 01:02:47.282816 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d59967b92bf2bb19b28ac9354c7eaf360e79496f4387e20edf7d4d9289aee36"} err="failed to get container status \"1d59967b92bf2bb19b28ac9354c7eaf360e79496f4387e20edf7d4d9289aee36\": rpc error: code = NotFound desc = could not find container \"1d59967b92bf2bb19b28ac9354c7eaf360e79496f4387e20edf7d4d9289aee36\": container with ID starting with 1d59967b92bf2bb19b28ac9354c7eaf360e79496f4387e20edf7d4d9289aee36 not found: ID does not exist" Jan 24 01:02:48 crc kubenswrapper[4676]: I0124 01:02:48.283712 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04f6e0b9-60c2-4233-9ae1-466f0d9856f7" path="/var/lib/kubelet/pods/04f6e0b9-60c2-4233-9ae1-466f0d9856f7/volumes" Jan 24 01:03:39 crc kubenswrapper[4676]: I0124 01:03:39.364234 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 01:03:39 crc kubenswrapper[4676]: I0124 
01:03:39.364927 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 01:04:09 crc kubenswrapper[4676]: I0124 01:04:09.364030 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 01:04:09 crc kubenswrapper[4676]: I0124 01:04:09.364912 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.523370 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8qqfp"] Jan 24 01:04:14 crc kubenswrapper[4676]: E0124 01:04:14.524615 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f6e0b9-60c2-4233-9ae1-466f0d9856f7" containerName="registry-server" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.524639 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f6e0b9-60c2-4233-9ae1-466f0d9856f7" containerName="registry-server" Jan 24 01:04:14 crc kubenswrapper[4676]: E0124 01:04:14.524674 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f6e0b9-60c2-4233-9ae1-466f0d9856f7" containerName="extract-utilities" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.524686 4676 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="04f6e0b9-60c2-4233-9ae1-466f0d9856f7" containerName="extract-utilities" Jan 24 01:04:14 crc kubenswrapper[4676]: E0124 01:04:14.524711 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f6e0b9-60c2-4233-9ae1-466f0d9856f7" containerName="extract-content" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.524722 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f6e0b9-60c2-4233-9ae1-466f0d9856f7" containerName="extract-content" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.525027 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f6e0b9-60c2-4233-9ae1-466f0d9856f7" containerName="registry-server" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.526866 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.539560 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8qqfp"] Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.657862 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ec6f03-36a9-4179-aee2-d3ba73569525-catalog-content\") pod \"certified-operators-8qqfp\" (UID: \"45ec6f03-36a9-4179-aee2-d3ba73569525\") " pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.657984 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ec6f03-36a9-4179-aee2-d3ba73569525-utilities\") pod \"certified-operators-8qqfp\" (UID: \"45ec6f03-36a9-4179-aee2-d3ba73569525\") " pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.658019 4676 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr5km\" (UniqueName: \"kubernetes.io/projected/45ec6f03-36a9-4179-aee2-d3ba73569525-kube-api-access-zr5km\") pod \"certified-operators-8qqfp\" (UID: \"45ec6f03-36a9-4179-aee2-d3ba73569525\") " pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.759327 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ec6f03-36a9-4179-aee2-d3ba73569525-catalog-content\") pod \"certified-operators-8qqfp\" (UID: \"45ec6f03-36a9-4179-aee2-d3ba73569525\") " pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.759511 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ec6f03-36a9-4179-aee2-d3ba73569525-utilities\") pod \"certified-operators-8qqfp\" (UID: \"45ec6f03-36a9-4179-aee2-d3ba73569525\") " pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.759562 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr5km\" (UniqueName: \"kubernetes.io/projected/45ec6f03-36a9-4179-aee2-d3ba73569525-kube-api-access-zr5km\") pod \"certified-operators-8qqfp\" (UID: \"45ec6f03-36a9-4179-aee2-d3ba73569525\") " pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.760483 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ec6f03-36a9-4179-aee2-d3ba73569525-catalog-content\") pod \"certified-operators-8qqfp\" (UID: \"45ec6f03-36a9-4179-aee2-d3ba73569525\") " pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.760766 4676 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ec6f03-36a9-4179-aee2-d3ba73569525-utilities\") pod \"certified-operators-8qqfp\" (UID: \"45ec6f03-36a9-4179-aee2-d3ba73569525\") " pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.801009 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr5km\" (UniqueName: \"kubernetes.io/projected/45ec6f03-36a9-4179-aee2-d3ba73569525-kube-api-access-zr5km\") pod \"certified-operators-8qqfp\" (UID: \"45ec6f03-36a9-4179-aee2-d3ba73569525\") " pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:14 crc kubenswrapper[4676]: I0124 01:04:14.889430 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:15 crc kubenswrapper[4676]: I0124 01:04:15.648050 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8qqfp"] Jan 24 01:04:16 crc kubenswrapper[4676]: I0124 01:04:16.182934 4676 generic.go:334] "Generic (PLEG): container finished" podID="45ec6f03-36a9-4179-aee2-d3ba73569525" containerID="836cd904b9af0c0d701a50e6dc3fae19ab1fc0c795615cfe51e9b14e16cea721" exitCode=0 Jan 24 01:04:16 crc kubenswrapper[4676]: I0124 01:04:16.183019 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qqfp" event={"ID":"45ec6f03-36a9-4179-aee2-d3ba73569525","Type":"ContainerDied","Data":"836cd904b9af0c0d701a50e6dc3fae19ab1fc0c795615cfe51e9b14e16cea721"} Jan 24 01:04:16 crc kubenswrapper[4676]: I0124 01:04:16.183092 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qqfp" event={"ID":"45ec6f03-36a9-4179-aee2-d3ba73569525","Type":"ContainerStarted","Data":"b754be22bda55b2d969d9d4307b2e0808737203d0f55c6fb5a2728ac67d3f9dd"} Jan 24 01:04:16 crc 
kubenswrapper[4676]: I0124 01:04:16.184780 4676 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 01:04:17 crc kubenswrapper[4676]: I0124 01:04:17.193743 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qqfp" event={"ID":"45ec6f03-36a9-4179-aee2-d3ba73569525","Type":"ContainerStarted","Data":"dfd9d9856f7a8afa17e0f5cfe7c12b474543a7274fba591c7fb28969982c3c36"} Jan 24 01:04:18 crc kubenswrapper[4676]: I0124 01:04:18.207555 4676 generic.go:334] "Generic (PLEG): container finished" podID="45ec6f03-36a9-4179-aee2-d3ba73569525" containerID="dfd9d9856f7a8afa17e0f5cfe7c12b474543a7274fba591c7fb28969982c3c36" exitCode=0 Jan 24 01:04:18 crc kubenswrapper[4676]: I0124 01:04:18.207907 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qqfp" event={"ID":"45ec6f03-36a9-4179-aee2-d3ba73569525","Type":"ContainerDied","Data":"dfd9d9856f7a8afa17e0f5cfe7c12b474543a7274fba591c7fb28969982c3c36"} Jan 24 01:04:19 crc kubenswrapper[4676]: I0124 01:04:19.226066 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qqfp" event={"ID":"45ec6f03-36a9-4179-aee2-d3ba73569525","Type":"ContainerStarted","Data":"1c023458df9b94edf9a49905a4b340c8073ab2d77f915f786ec88a9ad9a156e3"} Jan 24 01:04:19 crc kubenswrapper[4676]: I0124 01:04:19.262010 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8qqfp" podStartSLOduration=2.852433186 podStartE2EDuration="5.261992591s" podCreationTimestamp="2026-01-24 01:04:14 +0000 UTC" firstStartedPulling="2026-01-24 01:04:16.184532326 +0000 UTC m=+3640.214503327" lastFinishedPulling="2026-01-24 01:04:18.594091691 +0000 UTC m=+3642.624062732" observedRunningTime="2026-01-24 01:04:19.255504321 +0000 UTC m=+3643.285475322" watchObservedRunningTime="2026-01-24 01:04:19.261992591 +0000 UTC 
m=+3643.291963592" Jan 24 01:04:24 crc kubenswrapper[4676]: I0124 01:04:24.890523 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:24 crc kubenswrapper[4676]: I0124 01:04:24.891183 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:25 crc kubenswrapper[4676]: I0124 01:04:25.019122 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:25 crc kubenswrapper[4676]: I0124 01:04:25.326887 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:25 crc kubenswrapper[4676]: I0124 01:04:25.387069 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8qqfp"] Jan 24 01:04:27 crc kubenswrapper[4676]: I0124 01:04:27.313233 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8qqfp" podUID="45ec6f03-36a9-4179-aee2-d3ba73569525" containerName="registry-server" containerID="cri-o://1c023458df9b94edf9a49905a4b340c8073ab2d77f915f786ec88a9ad9a156e3" gracePeriod=2 Jan 24 01:04:27 crc kubenswrapper[4676]: I0124 01:04:27.946248 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.062749 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ec6f03-36a9-4179-aee2-d3ba73569525-utilities\") pod \"45ec6f03-36a9-4179-aee2-d3ba73569525\" (UID: \"45ec6f03-36a9-4179-aee2-d3ba73569525\") " Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.062887 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ec6f03-36a9-4179-aee2-d3ba73569525-catalog-content\") pod \"45ec6f03-36a9-4179-aee2-d3ba73569525\" (UID: \"45ec6f03-36a9-4179-aee2-d3ba73569525\") " Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.062928 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr5km\" (UniqueName: \"kubernetes.io/projected/45ec6f03-36a9-4179-aee2-d3ba73569525-kube-api-access-zr5km\") pod \"45ec6f03-36a9-4179-aee2-d3ba73569525\" (UID: \"45ec6f03-36a9-4179-aee2-d3ba73569525\") " Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.064041 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ec6f03-36a9-4179-aee2-d3ba73569525-utilities" (OuterVolumeSpecName: "utilities") pod "45ec6f03-36a9-4179-aee2-d3ba73569525" (UID: "45ec6f03-36a9-4179-aee2-d3ba73569525"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.071648 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ec6f03-36a9-4179-aee2-d3ba73569525-kube-api-access-zr5km" (OuterVolumeSpecName: "kube-api-access-zr5km") pod "45ec6f03-36a9-4179-aee2-d3ba73569525" (UID: "45ec6f03-36a9-4179-aee2-d3ba73569525"). InnerVolumeSpecName "kube-api-access-zr5km". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.111522 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ec6f03-36a9-4179-aee2-d3ba73569525-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45ec6f03-36a9-4179-aee2-d3ba73569525" (UID: "45ec6f03-36a9-4179-aee2-d3ba73569525"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.165305 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ec6f03-36a9-4179-aee2-d3ba73569525-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.165341 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ec6f03-36a9-4179-aee2-d3ba73569525-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.165354 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr5km\" (UniqueName: \"kubernetes.io/projected/45ec6f03-36a9-4179-aee2-d3ba73569525-kube-api-access-zr5km\") on node \"crc\" DevicePath \"\"" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.324029 4676 generic.go:334] "Generic (PLEG): container finished" podID="45ec6f03-36a9-4179-aee2-d3ba73569525" containerID="1c023458df9b94edf9a49905a4b340c8073ab2d77f915f786ec88a9ad9a156e3" exitCode=0 Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.324065 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qqfp" event={"ID":"45ec6f03-36a9-4179-aee2-d3ba73569525","Type":"ContainerDied","Data":"1c023458df9b94edf9a49905a4b340c8073ab2d77f915f786ec88a9ad9a156e3"} Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.324116 4676 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-8qqfp" event={"ID":"45ec6f03-36a9-4179-aee2-d3ba73569525","Type":"ContainerDied","Data":"b754be22bda55b2d969d9d4307b2e0808737203d0f55c6fb5a2728ac67d3f9dd"} Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.324139 4676 scope.go:117] "RemoveContainer" containerID="1c023458df9b94edf9a49905a4b340c8073ab2d77f915f786ec88a9ad9a156e3" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.324156 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qqfp" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.347141 4676 scope.go:117] "RemoveContainer" containerID="dfd9d9856f7a8afa17e0f5cfe7c12b474543a7274fba591c7fb28969982c3c36" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.355944 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8qqfp"] Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.379103 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8qqfp"] Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.388244 4676 scope.go:117] "RemoveContainer" containerID="836cd904b9af0c0d701a50e6dc3fae19ab1fc0c795615cfe51e9b14e16cea721" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.444027 4676 scope.go:117] "RemoveContainer" containerID="1c023458df9b94edf9a49905a4b340c8073ab2d77f915f786ec88a9ad9a156e3" Jan 24 01:04:28 crc kubenswrapper[4676]: E0124 01:04:28.444608 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c023458df9b94edf9a49905a4b340c8073ab2d77f915f786ec88a9ad9a156e3\": container with ID starting with 1c023458df9b94edf9a49905a4b340c8073ab2d77f915f786ec88a9ad9a156e3 not found: ID does not exist" containerID="1c023458df9b94edf9a49905a4b340c8073ab2d77f915f786ec88a9ad9a156e3" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 
01:04:28.445366 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c023458df9b94edf9a49905a4b340c8073ab2d77f915f786ec88a9ad9a156e3"} err="failed to get container status \"1c023458df9b94edf9a49905a4b340c8073ab2d77f915f786ec88a9ad9a156e3\": rpc error: code = NotFound desc = could not find container \"1c023458df9b94edf9a49905a4b340c8073ab2d77f915f786ec88a9ad9a156e3\": container with ID starting with 1c023458df9b94edf9a49905a4b340c8073ab2d77f915f786ec88a9ad9a156e3 not found: ID does not exist" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.445678 4676 scope.go:117] "RemoveContainer" containerID="dfd9d9856f7a8afa17e0f5cfe7c12b474543a7274fba591c7fb28969982c3c36" Jan 24 01:04:28 crc kubenswrapper[4676]: E0124 01:04:28.446685 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfd9d9856f7a8afa17e0f5cfe7c12b474543a7274fba591c7fb28969982c3c36\": container with ID starting with dfd9d9856f7a8afa17e0f5cfe7c12b474543a7274fba591c7fb28969982c3c36 not found: ID does not exist" containerID="dfd9d9856f7a8afa17e0f5cfe7c12b474543a7274fba591c7fb28969982c3c36" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.446803 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfd9d9856f7a8afa17e0f5cfe7c12b474543a7274fba591c7fb28969982c3c36"} err="failed to get container status \"dfd9d9856f7a8afa17e0f5cfe7c12b474543a7274fba591c7fb28969982c3c36\": rpc error: code = NotFound desc = could not find container \"dfd9d9856f7a8afa17e0f5cfe7c12b474543a7274fba591c7fb28969982c3c36\": container with ID starting with dfd9d9856f7a8afa17e0f5cfe7c12b474543a7274fba591c7fb28969982c3c36 not found: ID does not exist" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.446909 4676 scope.go:117] "RemoveContainer" containerID="836cd904b9af0c0d701a50e6dc3fae19ab1fc0c795615cfe51e9b14e16cea721" Jan 24 01:04:28 crc 
kubenswrapper[4676]: E0124 01:04:28.447324 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"836cd904b9af0c0d701a50e6dc3fae19ab1fc0c795615cfe51e9b14e16cea721\": container with ID starting with 836cd904b9af0c0d701a50e6dc3fae19ab1fc0c795615cfe51e9b14e16cea721 not found: ID does not exist" containerID="836cd904b9af0c0d701a50e6dc3fae19ab1fc0c795615cfe51e9b14e16cea721" Jan 24 01:04:28 crc kubenswrapper[4676]: I0124 01:04:28.447359 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"836cd904b9af0c0d701a50e6dc3fae19ab1fc0c795615cfe51e9b14e16cea721"} err="failed to get container status \"836cd904b9af0c0d701a50e6dc3fae19ab1fc0c795615cfe51e9b14e16cea721\": rpc error: code = NotFound desc = could not find container \"836cd904b9af0c0d701a50e6dc3fae19ab1fc0c795615cfe51e9b14e16cea721\": container with ID starting with 836cd904b9af0c0d701a50e6dc3fae19ab1fc0c795615cfe51e9b14e16cea721 not found: ID does not exist" Jan 24 01:04:30 crc kubenswrapper[4676]: I0124 01:04:30.267420 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45ec6f03-36a9-4179-aee2-d3ba73569525" path="/var/lib/kubelet/pods/45ec6f03-36a9-4179-aee2-d3ba73569525/volumes" Jan 24 01:04:39 crc kubenswrapper[4676]: I0124 01:04:39.364453 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 01:04:39 crc kubenswrapper[4676]: I0124 01:04:39.365055 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 24 01:04:39 crc kubenswrapper[4676]: I0124 01:04:39.365120 4676 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 01:04:39 crc kubenswrapper[4676]: I0124 01:04:39.366262 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9"} pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 01:04:39 crc kubenswrapper[4676]: I0124 01:04:39.366372 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" containerID="cri-o://6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" gracePeriod=600 Jan 24 01:04:39 crc kubenswrapper[4676]: E0124 01:04:39.498037 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:04:40 crc kubenswrapper[4676]: I0124 01:04:40.444924 4676 generic.go:334] "Generic (PLEG): container finished" podID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" exitCode=0 Jan 24 01:04:40 crc kubenswrapper[4676]: I0124 01:04:40.445016 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerDied","Data":"6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9"} Jan 24 01:04:40 crc kubenswrapper[4676]: I0124 01:04:40.445248 4676 scope.go:117] "RemoveContainer" containerID="e910204e57fdf9a2ba78d40b8e9fd8506cf9f7184737c120d4af2680b61668e0" Jan 24 01:04:40 crc kubenswrapper[4676]: I0124 01:04:40.445943 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:04:40 crc kubenswrapper[4676]: E0124 01:04:40.446310 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.084013 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pgb89/must-gather-rzdhh"] Jan 24 01:04:41 crc kubenswrapper[4676]: E0124 01:04:41.084438 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ec6f03-36a9-4179-aee2-d3ba73569525" containerName="extract-utilities" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.084452 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ec6f03-36a9-4179-aee2-d3ba73569525" containerName="extract-utilities" Jan 24 01:04:41 crc kubenswrapper[4676]: E0124 01:04:41.084466 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ec6f03-36a9-4179-aee2-d3ba73569525" containerName="registry-server" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.084473 4676 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="45ec6f03-36a9-4179-aee2-d3ba73569525" containerName="registry-server" Jan 24 01:04:41 crc kubenswrapper[4676]: E0124 01:04:41.084485 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ec6f03-36a9-4179-aee2-d3ba73569525" containerName="extract-content" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.084492 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ec6f03-36a9-4179-aee2-d3ba73569525" containerName="extract-content" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.084672 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ec6f03-36a9-4179-aee2-d3ba73569525" containerName="registry-server" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.085705 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgb89/must-gather-rzdhh" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.089344 4676 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pgb89"/"default-dockercfg-lvdrx" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.089511 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pgb89"/"kube-root-ca.crt" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.089642 4676 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pgb89"/"openshift-service-ca.crt" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.093843 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pgb89/must-gather-rzdhh"] Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.184565 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ce7f9d18-7eec-43f4-908a-d65d210492e7-must-gather-output\") pod \"must-gather-rzdhh\" (UID: \"ce7f9d18-7eec-43f4-908a-d65d210492e7\") " 
pod="openshift-must-gather-pgb89/must-gather-rzdhh" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.184723 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2kcb\" (UniqueName: \"kubernetes.io/projected/ce7f9d18-7eec-43f4-908a-d65d210492e7-kube-api-access-z2kcb\") pod \"must-gather-rzdhh\" (UID: \"ce7f9d18-7eec-43f4-908a-d65d210492e7\") " pod="openshift-must-gather-pgb89/must-gather-rzdhh" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.286616 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2kcb\" (UniqueName: \"kubernetes.io/projected/ce7f9d18-7eec-43f4-908a-d65d210492e7-kube-api-access-z2kcb\") pod \"must-gather-rzdhh\" (UID: \"ce7f9d18-7eec-43f4-908a-d65d210492e7\") " pod="openshift-must-gather-pgb89/must-gather-rzdhh" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.286704 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ce7f9d18-7eec-43f4-908a-d65d210492e7-must-gather-output\") pod \"must-gather-rzdhh\" (UID: \"ce7f9d18-7eec-43f4-908a-d65d210492e7\") " pod="openshift-must-gather-pgb89/must-gather-rzdhh" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.287142 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ce7f9d18-7eec-43f4-908a-d65d210492e7-must-gather-output\") pod \"must-gather-rzdhh\" (UID: \"ce7f9d18-7eec-43f4-908a-d65d210492e7\") " pod="openshift-must-gather-pgb89/must-gather-rzdhh" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.310022 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2kcb\" (UniqueName: \"kubernetes.io/projected/ce7f9d18-7eec-43f4-908a-d65d210492e7-kube-api-access-z2kcb\") pod \"must-gather-rzdhh\" (UID: \"ce7f9d18-7eec-43f4-908a-d65d210492e7\") " 
pod="openshift-must-gather-pgb89/must-gather-rzdhh" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.452424 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgb89/must-gather-rzdhh" Jan 24 01:04:41 crc kubenswrapper[4676]: I0124 01:04:41.918679 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pgb89/must-gather-rzdhh"] Jan 24 01:04:42 crc kubenswrapper[4676]: I0124 01:04:42.482582 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgb89/must-gather-rzdhh" event={"ID":"ce7f9d18-7eec-43f4-908a-d65d210492e7","Type":"ContainerStarted","Data":"ebdf94432be40adc9409ec2d7d078306d66cbfc22f6dc35636a6fecb28c07f9e"} Jan 24 01:04:42 crc kubenswrapper[4676]: I0124 01:04:42.482858 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgb89/must-gather-rzdhh" event={"ID":"ce7f9d18-7eec-43f4-908a-d65d210492e7","Type":"ContainerStarted","Data":"d0b712dc713fcee1a9dcfc5340b4f493d3bd6dca5b50bbf686427d3022f3f332"} Jan 24 01:04:42 crc kubenswrapper[4676]: I0124 01:04:42.482872 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgb89/must-gather-rzdhh" event={"ID":"ce7f9d18-7eec-43f4-908a-d65d210492e7","Type":"ContainerStarted","Data":"16c437b62ad21fcf66bdc37ab44a5e538e268edbe29d7c40000ad0f924ce60b1"} Jan 24 01:04:42 crc kubenswrapper[4676]: I0124 01:04:42.499737 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pgb89/must-gather-rzdhh" podStartSLOduration=1.499719873 podStartE2EDuration="1.499719873s" podCreationTimestamp="2026-01-24 01:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 01:04:42.4983185 +0000 UTC m=+3666.528289521" watchObservedRunningTime="2026-01-24 01:04:42.499719873 +0000 UTC m=+3666.529690874" Jan 24 01:04:46 crc kubenswrapper[4676]: 
I0124 01:04:46.034027 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pgb89/crc-debug-5mpdp"] Jan 24 01:04:46 crc kubenswrapper[4676]: I0124 01:04:46.036134 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgb89/crc-debug-5mpdp" Jan 24 01:04:46 crc kubenswrapper[4676]: I0124 01:04:46.090156 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a-host\") pod \"crc-debug-5mpdp\" (UID: \"c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a\") " pod="openshift-must-gather-pgb89/crc-debug-5mpdp" Jan 24 01:04:46 crc kubenswrapper[4676]: I0124 01:04:46.090503 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tl8t\" (UniqueName: \"kubernetes.io/projected/c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a-kube-api-access-7tl8t\") pod \"crc-debug-5mpdp\" (UID: \"c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a\") " pod="openshift-must-gather-pgb89/crc-debug-5mpdp" Jan 24 01:04:46 crc kubenswrapper[4676]: I0124 01:04:46.192704 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a-host\") pod \"crc-debug-5mpdp\" (UID: \"c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a\") " pod="openshift-must-gather-pgb89/crc-debug-5mpdp" Jan 24 01:04:46 crc kubenswrapper[4676]: I0124 01:04:46.192763 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tl8t\" (UniqueName: \"kubernetes.io/projected/c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a-kube-api-access-7tl8t\") pod \"crc-debug-5mpdp\" (UID: \"c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a\") " pod="openshift-must-gather-pgb89/crc-debug-5mpdp" Jan 24 01:04:46 crc kubenswrapper[4676]: I0124 01:04:46.193059 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a-host\") pod \"crc-debug-5mpdp\" (UID: \"c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a\") " pod="openshift-must-gather-pgb89/crc-debug-5mpdp" Jan 24 01:04:46 crc kubenswrapper[4676]: I0124 01:04:46.228098 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tl8t\" (UniqueName: \"kubernetes.io/projected/c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a-kube-api-access-7tl8t\") pod \"crc-debug-5mpdp\" (UID: \"c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a\") " pod="openshift-must-gather-pgb89/crc-debug-5mpdp" Jan 24 01:04:46 crc kubenswrapper[4676]: I0124 01:04:46.365409 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgb89/crc-debug-5mpdp" Jan 24 01:04:46 crc kubenswrapper[4676]: W0124 01:04:46.421136 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2d41c9a_a05f_46c4_b548_fb6bfc0dee4a.slice/crio-f06001573ddb33783b3221d0ddb7f1268c8d975b7736cec8fc9e1555d436bf3d WatchSource:0}: Error finding container f06001573ddb33783b3221d0ddb7f1268c8d975b7736cec8fc9e1555d436bf3d: Status 404 returned error can't find the container with id f06001573ddb33783b3221d0ddb7f1268c8d975b7736cec8fc9e1555d436bf3d Jan 24 01:04:46 crc kubenswrapper[4676]: I0124 01:04:46.523289 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgb89/crc-debug-5mpdp" event={"ID":"c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a","Type":"ContainerStarted","Data":"f06001573ddb33783b3221d0ddb7f1268c8d975b7736cec8fc9e1555d436bf3d"} Jan 24 01:04:47 crc kubenswrapper[4676]: I0124 01:04:47.533055 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgb89/crc-debug-5mpdp" event={"ID":"c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a","Type":"ContainerStarted","Data":"6e1d712a5b60b5fed2dd683d40f7c0b75205c48776013f310795ca214bdf4a77"} Jan 24 01:04:47 crc 
kubenswrapper[4676]: I0124 01:04:47.548721 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pgb89/crc-debug-5mpdp" podStartSLOduration=1.5487052810000002 podStartE2EDuration="1.548705281s" podCreationTimestamp="2026-01-24 01:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 01:04:47.544820271 +0000 UTC m=+3671.574791272" watchObservedRunningTime="2026-01-24 01:04:47.548705281 +0000 UTC m=+3671.578676282" Jan 24 01:04:54 crc kubenswrapper[4676]: I0124 01:04:54.255555 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:04:54 crc kubenswrapper[4676]: E0124 01:04:54.256351 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:04:57 crc kubenswrapper[4676]: I0124 01:04:57.617669 4676 generic.go:334] "Generic (PLEG): container finished" podID="c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a" containerID="6e1d712a5b60b5fed2dd683d40f7c0b75205c48776013f310795ca214bdf4a77" exitCode=0 Jan 24 01:04:57 crc kubenswrapper[4676]: I0124 01:04:57.617753 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgb89/crc-debug-5mpdp" event={"ID":"c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a","Type":"ContainerDied","Data":"6e1d712a5b60b5fed2dd683d40f7c0b75205c48776013f310795ca214bdf4a77"} Jan 24 01:04:58 crc kubenswrapper[4676]: I0124 01:04:58.762718 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pgb89/crc-debug-5mpdp" Jan 24 01:04:58 crc kubenswrapper[4676]: I0124 01:04:58.814708 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pgb89/crc-debug-5mpdp"] Jan 24 01:04:58 crc kubenswrapper[4676]: I0124 01:04:58.821627 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pgb89/crc-debug-5mpdp"] Jan 24 01:04:58 crc kubenswrapper[4676]: I0124 01:04:58.824492 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tl8t\" (UniqueName: \"kubernetes.io/projected/c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a-kube-api-access-7tl8t\") pod \"c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a\" (UID: \"c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a\") " Jan 24 01:04:58 crc kubenswrapper[4676]: I0124 01:04:58.824603 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a-host\") pod \"c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a\" (UID: \"c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a\") " Jan 24 01:04:58 crc kubenswrapper[4676]: I0124 01:04:58.825021 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a-host" (OuterVolumeSpecName: "host") pod "c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a" (UID: "c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 01:04:58 crc kubenswrapper[4676]: I0124 01:04:58.834608 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a-kube-api-access-7tl8t" (OuterVolumeSpecName: "kube-api-access-7tl8t") pod "c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a" (UID: "c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a"). InnerVolumeSpecName "kube-api-access-7tl8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 01:04:58 crc kubenswrapper[4676]: I0124 01:04:58.926737 4676 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a-host\") on node \"crc\" DevicePath \"\"" Jan 24 01:04:58 crc kubenswrapper[4676]: I0124 01:04:58.927029 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tl8t\" (UniqueName: \"kubernetes.io/projected/c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a-kube-api-access-7tl8t\") on node \"crc\" DevicePath \"\"" Jan 24 01:04:59 crc kubenswrapper[4676]: I0124 01:04:59.635820 4676 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f06001573ddb33783b3221d0ddb7f1268c8d975b7736cec8fc9e1555d436bf3d" Jan 24 01:04:59 crc kubenswrapper[4676]: I0124 01:04:59.636178 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgb89/crc-debug-5mpdp" Jan 24 01:05:00 crc kubenswrapper[4676]: I0124 01:05:00.102068 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pgb89/crc-debug-vxdn7"] Jan 24 01:05:00 crc kubenswrapper[4676]: E0124 01:05:00.103462 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a" containerName="container-00" Jan 24 01:05:00 crc kubenswrapper[4676]: I0124 01:05:00.103558 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a" containerName="container-00" Jan 24 01:05:00 crc kubenswrapper[4676]: I0124 01:05:00.103803 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a" containerName="container-00" Jan 24 01:05:00 crc kubenswrapper[4676]: I0124 01:05:00.104475 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pgb89/crc-debug-vxdn7" Jan 24 01:05:00 crc kubenswrapper[4676]: I0124 01:05:00.150157 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8-host\") pod \"crc-debug-vxdn7\" (UID: \"d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8\") " pod="openshift-must-gather-pgb89/crc-debug-vxdn7" Jan 24 01:05:00 crc kubenswrapper[4676]: I0124 01:05:00.150236 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqnvr\" (UniqueName: \"kubernetes.io/projected/d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8-kube-api-access-xqnvr\") pod \"crc-debug-vxdn7\" (UID: \"d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8\") " pod="openshift-must-gather-pgb89/crc-debug-vxdn7" Jan 24 01:05:00 crc kubenswrapper[4676]: I0124 01:05:00.252813 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8-host\") pod \"crc-debug-vxdn7\" (UID: \"d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8\") " pod="openshift-must-gather-pgb89/crc-debug-vxdn7" Jan 24 01:05:00 crc kubenswrapper[4676]: I0124 01:05:00.252867 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqnvr\" (UniqueName: \"kubernetes.io/projected/d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8-kube-api-access-xqnvr\") pod \"crc-debug-vxdn7\" (UID: \"d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8\") " pod="openshift-must-gather-pgb89/crc-debug-vxdn7" Jan 24 01:05:00 crc kubenswrapper[4676]: I0124 01:05:00.252974 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8-host\") pod \"crc-debug-vxdn7\" (UID: \"d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8\") " pod="openshift-must-gather-pgb89/crc-debug-vxdn7" Jan 24 01:05:00 crc 
kubenswrapper[4676]: I0124 01:05:00.266677 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a" path="/var/lib/kubelet/pods/c2d41c9a-a05f-46c4-b548-fb6bfc0dee4a/volumes" Jan 24 01:05:00 crc kubenswrapper[4676]: I0124 01:05:00.282844 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqnvr\" (UniqueName: \"kubernetes.io/projected/d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8-kube-api-access-xqnvr\") pod \"crc-debug-vxdn7\" (UID: \"d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8\") " pod="openshift-must-gather-pgb89/crc-debug-vxdn7" Jan 24 01:05:00 crc kubenswrapper[4676]: I0124 01:05:00.430552 4676 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgb89/crc-debug-vxdn7" Jan 24 01:05:00 crc kubenswrapper[4676]: W0124 01:05:00.481624 4676 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd98ed69b_2cc8_4c59_a562_d1cafcfcfcf8.slice/crio-7332cafa0ac25e2d33eeed9dc137e087b955765c89b55c9d3fff5e9dc7f907c0 WatchSource:0}: Error finding container 7332cafa0ac25e2d33eeed9dc137e087b955765c89b55c9d3fff5e9dc7f907c0: Status 404 returned error can't find the container with id 7332cafa0ac25e2d33eeed9dc137e087b955765c89b55c9d3fff5e9dc7f907c0 Jan 24 01:05:00 crc kubenswrapper[4676]: I0124 01:05:00.645058 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgb89/crc-debug-vxdn7" event={"ID":"d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8","Type":"ContainerStarted","Data":"7332cafa0ac25e2d33eeed9dc137e087b955765c89b55c9d3fff5e9dc7f907c0"} Jan 24 01:05:01 crc kubenswrapper[4676]: I0124 01:05:01.655213 4676 generic.go:334] "Generic (PLEG): container finished" podID="d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8" containerID="a96dc5b6f6afa933b2ccb20288d52f9f02e4f6788c82a166648f646efd76a1ef" exitCode=1 Jan 24 01:05:01 crc kubenswrapper[4676]: I0124 01:05:01.655430 4676 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgb89/crc-debug-vxdn7" event={"ID":"d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8","Type":"ContainerDied","Data":"a96dc5b6f6afa933b2ccb20288d52f9f02e4f6788c82a166648f646efd76a1ef"} Jan 24 01:05:01 crc kubenswrapper[4676]: I0124 01:05:01.698959 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pgb89/crc-debug-vxdn7"] Jan 24 01:05:01 crc kubenswrapper[4676]: I0124 01:05:01.706293 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pgb89/crc-debug-vxdn7"] Jan 24 01:05:02 crc kubenswrapper[4676]: I0124 01:05:02.752615 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgb89/crc-debug-vxdn7" Jan 24 01:05:02 crc kubenswrapper[4676]: I0124 01:05:02.796568 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8-host\") pod \"d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8\" (UID: \"d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8\") " Jan 24 01:05:02 crc kubenswrapper[4676]: I0124 01:05:02.796846 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqnvr\" (UniqueName: \"kubernetes.io/projected/d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8-kube-api-access-xqnvr\") pod \"d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8\" (UID: \"d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8\") " Jan 24 01:05:02 crc kubenswrapper[4676]: I0124 01:05:02.796695 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8-host" (OuterVolumeSpecName: "host") pod "d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8" (UID: "d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 01:05:02 crc kubenswrapper[4676]: I0124 01:05:02.814078 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8-kube-api-access-xqnvr" (OuterVolumeSpecName: "kube-api-access-xqnvr") pod "d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8" (UID: "d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8"). InnerVolumeSpecName "kube-api-access-xqnvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 01:05:02 crc kubenswrapper[4676]: I0124 01:05:02.898929 4676 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8-host\") on node \"crc\" DevicePath \"\"" Jan 24 01:05:02 crc kubenswrapper[4676]: I0124 01:05:02.898956 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqnvr\" (UniqueName: \"kubernetes.io/projected/d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8-kube-api-access-xqnvr\") on node \"crc\" DevicePath \"\"" Jan 24 01:05:03 crc kubenswrapper[4676]: I0124 01:05:03.673087 4676 scope.go:117] "RemoveContainer" containerID="a96dc5b6f6afa933b2ccb20288d52f9f02e4f6788c82a166648f646efd76a1ef" Jan 24 01:05:03 crc kubenswrapper[4676]: I0124 01:05:03.673166 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pgb89/crc-debug-vxdn7" Jan 24 01:05:04 crc kubenswrapper[4676]: I0124 01:05:04.279038 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8" path="/var/lib/kubelet/pods/d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8/volumes" Jan 24 01:05:09 crc kubenswrapper[4676]: I0124 01:05:09.256228 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:05:09 crc kubenswrapper[4676]: E0124 01:05:09.257050 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:05:21 crc kubenswrapper[4676]: I0124 01:05:21.255698 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:05:21 crc kubenswrapper[4676]: E0124 01:05:21.256991 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:05:36 crc kubenswrapper[4676]: I0124 01:05:36.277225 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:05:36 crc kubenswrapper[4676]: E0124 01:05:36.279114 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:05:44 crc kubenswrapper[4676]: I0124 01:05:44.418541 4676 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-f47776b4c-v4xb2" podUID="b976b9e2-b80e-4626-919d-3bb84f0151e8" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 24 01:05:47 crc kubenswrapper[4676]: I0124 01:05:47.255444 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:05:47 crc kubenswrapper[4676]: E0124 01:05:47.256075 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:06:00 crc kubenswrapper[4676]: I0124 01:06:00.256593 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:06:00 crc kubenswrapper[4676]: E0124 01:06:00.257366 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:06:00 crc kubenswrapper[4676]: I0124 01:06:00.565077 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-54b7b855f4-s49zw_c015646b-9361-4b5c-b465-a66d7fc5cc53/barbican-api/0.log" Jan 24 01:06:00 crc kubenswrapper[4676]: I0124 01:06:00.737823 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-54b7b855f4-s49zw_c015646b-9361-4b5c-b465-a66d7fc5cc53/barbican-api-log/0.log" Jan 24 01:06:00 crc kubenswrapper[4676]: I0124 01:06:00.839691 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-79ccd69c74-nj8k6_a05052ce-062e-423c-80cf-78349e42718f/barbican-keystone-listener/0.log" Jan 24 01:06:00 crc kubenswrapper[4676]: I0124 01:06:00.890137 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-79ccd69c74-nj8k6_a05052ce-062e-423c-80cf-78349e42718f/barbican-keystone-listener-log/0.log" Jan 24 01:06:01 crc kubenswrapper[4676]: I0124 01:06:01.007825 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68bd7fb46c-sflbz_009f35e0-3a98-453b-b92e-8db9e5c92798/barbican-worker/0.log" Jan 24 01:06:01 crc kubenswrapper[4676]: I0124 01:06:01.075288 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68bd7fb46c-sflbz_009f35e0-3a98-453b-b92e-8db9e5c92798/barbican-worker-log/0.log" Jan 24 01:06:01 crc kubenswrapper[4676]: I0124 01:06:01.194425 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-9z5hp_1fd4e1f4-0772-493a-b929-6e93470f9abf/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:01 crc kubenswrapper[4676]: I0124 01:06:01.284841 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58bbd77a-9518-4037-b96b-a1490082fb04/ceilometer-central-agent/0.log" Jan 24 01:06:01 crc 
kubenswrapper[4676]: I0124 01:06:01.869838 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58bbd77a-9518-4037-b96b-a1490082fb04/sg-core/0.log" Jan 24 01:06:01 crc kubenswrapper[4676]: I0124 01:06:01.894012 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58bbd77a-9518-4037-b96b-a1490082fb04/ceilometer-notification-agent/0.log" Jan 24 01:06:01 crc kubenswrapper[4676]: I0124 01:06:01.901158 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58bbd77a-9518-4037-b96b-a1490082fb04/proxy-httpd/0.log" Jan 24 01:06:02 crc kubenswrapper[4676]: I0124 01:06:02.099992 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d38cef52-387b-4633-be75-6dc455ad53c4/cinder-api/0.log" Jan 24 01:06:02 crc kubenswrapper[4676]: I0124 01:06:02.130246 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d38cef52-387b-4633-be75-6dc455ad53c4/cinder-api-log/0.log" Jan 24 01:06:02 crc kubenswrapper[4676]: I0124 01:06:02.272725 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978/cinder-scheduler/0.log" Jan 24 01:06:02 crc kubenswrapper[4676]: I0124 01:06:02.311517 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_629a8ba3-3e4e-4fc2-86a4-bb07d2fb4978/probe/0.log" Jan 24 01:06:02 crc kubenswrapper[4676]: I0124 01:06:02.418733 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wwcsg_7f552002-93ef-485f-9227-a94733534466/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:02 crc kubenswrapper[4676]: I0124 01:06:02.576877 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-fb7f7_f6bc5ee4-f730-4e1e-9684-b643daed2519/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:03 crc kubenswrapper[4676]: I0124 01:06:03.301482 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b865b64bc-drclt_cdf394f7-5d67-4a0f-9644-82fe83a72e2d/init/0.log" Jan 24 01:06:03 crc kubenswrapper[4676]: I0124 01:06:03.446251 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b865b64bc-drclt_cdf394f7-5d67-4a0f-9644-82fe83a72e2d/init/0.log" Jan 24 01:06:03 crc kubenswrapper[4676]: I0124 01:06:03.480560 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b865b64bc-drclt_cdf394f7-5d67-4a0f-9644-82fe83a72e2d/dnsmasq-dns/0.log" Jan 24 01:06:03 crc kubenswrapper[4676]: I0124 01:06:03.547695 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-skh8k_7e20d58a-5f01-4c23-9ab0-650d3ae76844/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:03 crc kubenswrapper[4676]: I0124 01:06:03.722895 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a6cef518-8385-4315-83d5-f46a6144d5a0/glance-log/0.log" Jan 24 01:06:03 crc kubenswrapper[4676]: I0124 01:06:03.778455 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a6cef518-8385-4315-83d5-f46a6144d5a0/glance-httpd/0.log" Jan 24 01:06:03 crc kubenswrapper[4676]: I0124 01:06:03.889945 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_042ec835-b8c1-43be-a19d-d70f76128e26/glance-httpd/0.log" Jan 24 01:06:03 crc kubenswrapper[4676]: I0124 01:06:03.925991 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_042ec835-b8c1-43be-a19d-d70f76128e26/glance-log/0.log" Jan 24 01:06:04 crc kubenswrapper[4676]: I0124 01:06:04.041703 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f876ddf46-fs7qv_ac7dce6b-3bd9-4ad9-9485-83d9384b8bad/horizon/1.log" Jan 24 01:06:04 crc kubenswrapper[4676]: I0124 01:06:04.135119 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f876ddf46-fs7qv_ac7dce6b-3bd9-4ad9-9485-83d9384b8bad/horizon/0.log" Jan 24 01:06:04 crc kubenswrapper[4676]: I0124 01:06:04.376688 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-t4nrs_e9d2a92f-e22e-44c5-86c5-8b38824e3d4c/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:04 crc kubenswrapper[4676]: I0124 01:06:04.423012 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f876ddf46-fs7qv_ac7dce6b-3bd9-4ad9-9485-83d9384b8bad/horizon-log/0.log" Jan 24 01:06:04 crc kubenswrapper[4676]: I0124 01:06:04.592075 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-sz24c_15fbef8d-e606-4b72-a994-e71d03e8fec8/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:04 crc kubenswrapper[4676]: I0124 01:06:04.704999 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-849b597d57-kw79c_29bf9e4b-4fb3-41f4-9280-f7ea2e61a844/keystone-api/0.log" Jan 24 01:06:04 crc kubenswrapper[4676]: I0124 01:06:04.764370 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29486941-6wtvm_483961de-208d-4593-a6c8-ecee687b7c06/keystone-cron/0.log" Jan 24 01:06:05 crc kubenswrapper[4676]: I0124 01:06:05.036612 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_25ee749c-0b84-4abd-9fe0-a6f23c0c912d/kube-state-metrics/0.log" Jan 24 
01:06:05 crc kubenswrapper[4676]: I0124 01:06:05.143582 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-ddk99_2cafe497-da96-4b39-bec2-1ec54f859303/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:05 crc kubenswrapper[4676]: I0124 01:06:05.235492 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_545f6045-cf2f-4d4b-91d8-227148ddd71a/memcached/0.log" Jan 24 01:06:05 crc kubenswrapper[4676]: I0124 01:06:05.386676 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bd579cfd9-q4npp_6dda8455-6777-4efc-abf3-df547cf58339/neutron-api/0.log" Jan 24 01:06:05 crc kubenswrapper[4676]: I0124 01:06:05.569328 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bd579cfd9-q4npp_6dda8455-6777-4efc-abf3-df547cf58339/neutron-httpd/0.log" Jan 24 01:06:05 crc kubenswrapper[4676]: I0124 01:06:05.691128 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-92xr7_cba640eb-c65f-46be-af5d-5126418c361a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:05 crc kubenswrapper[4676]: I0124 01:06:05.987709 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ff542eb3-0142-4d46-a6e6-2e89c73f5824/nova-api-api/0.log" Jan 24 01:06:06 crc kubenswrapper[4676]: I0124 01:06:06.100969 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ff542eb3-0142-4d46-a6e6-2e89c73f5824/nova-api-log/0.log" Jan 24 01:06:06 crc kubenswrapper[4676]: I0124 01:06:06.124136 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_400ba963-913a-401c-8f2e-21005977e0c2/nova-cell0-conductor-conductor/0.log" Jan 24 01:06:06 crc kubenswrapper[4676]: I0124 01:06:06.359458 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_4420d197-3908-406b-9661-67d64ccd7768/nova-cell1-conductor-conductor/0.log" Jan 24 01:06:06 crc kubenswrapper[4676]: I0124 01:06:06.460678 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b1499ed8-1041-4980-b9da-cb957cbf215c/nova-cell1-novncproxy-novncproxy/0.log" Jan 24 01:06:06 crc kubenswrapper[4676]: I0124 01:06:06.924735 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-5cl2g_75cae94d-b818-4f92-b42d-fd8cec63a657/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:07 crc kubenswrapper[4676]: I0124 01:06:07.081636 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f1085d48-78f1-4437-a518-239ba90b7c0b/nova-metadata-log/0.log" Jan 24 01:06:07 crc kubenswrapper[4676]: I0124 01:06:07.391480 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_19365292-50d8-4e94-952f-2df7ee20f0ba/mysql-bootstrap/0.log" Jan 24 01:06:07 crc kubenswrapper[4676]: I0124 01:06:07.465305 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_8ff6f87b-a571-43da-9fbc-9203f0001771/nova-scheduler-scheduler/0.log" Jan 24 01:06:07 crc kubenswrapper[4676]: I0124 01:06:07.614735 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_19365292-50d8-4e94-952f-2df7ee20f0ba/mysql-bootstrap/0.log" Jan 24 01:06:07 crc kubenswrapper[4676]: I0124 01:06:07.643341 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f1085d48-78f1-4437-a518-239ba90b7c0b/nova-metadata-metadata/0.log" Jan 24 01:06:07 crc kubenswrapper[4676]: I0124 01:06:07.661304 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_19365292-50d8-4e94-952f-2df7ee20f0ba/galera/0.log" Jan 24 01:06:07 crc kubenswrapper[4676]: I0124 
01:06:07.704751 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2bbaae64-ac2d-43c6-8984-5483f2eb4211/mysql-bootstrap/0.log" Jan 24 01:06:07 crc kubenswrapper[4676]: I0124 01:06:07.863885 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2bbaae64-ac2d-43c6-8984-5483f2eb4211/mysql-bootstrap/0.log" Jan 24 01:06:07 crc kubenswrapper[4676]: I0124 01:06:07.901559 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2bbaae64-ac2d-43c6-8984-5483f2eb4211/galera/0.log" Jan 24 01:06:07 crc kubenswrapper[4676]: I0124 01:06:07.901600 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f3fa6e59-785d-4d40-8d73-170552068e43/openstackclient/0.log" Jan 24 01:06:08 crc kubenswrapper[4676]: I0124 01:06:08.109313 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jsd4n_7adbcf83-efbd-4e8d-97e5-f8768463284a/ovn-controller/0.log" Jan 24 01:06:08 crc kubenswrapper[4676]: I0124 01:06:08.151362 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mq6ns_21550436-cd71-46d8-838e-c51c19ddf8ff/openstack-network-exporter/0.log" Jan 24 01:06:08 crc kubenswrapper[4676]: I0124 01:06:08.239040 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6sl9q_427fbd2d-16ef-44a6-a71d-8172f56b863d/ovsdb-server-init/0.log" Jan 24 01:06:08 crc kubenswrapper[4676]: I0124 01:06:08.440715 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6sl9q_427fbd2d-16ef-44a6-a71d-8172f56b863d/ovsdb-server-init/0.log" Jan 24 01:06:08 crc kubenswrapper[4676]: I0124 01:06:08.445175 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6sl9q_427fbd2d-16ef-44a6-a71d-8172f56b863d/ovsdb-server/0.log" Jan 24 01:06:08 crc kubenswrapper[4676]: I0124 01:06:08.505126 4676 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-527bv_55444bfa-a024-4606-aa57-6456c6688e52/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:08 crc kubenswrapper[4676]: I0124 01:06:08.514033 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6sl9q_427fbd2d-16ef-44a6-a71d-8172f56b863d/ovs-vswitchd/0.log" Jan 24 01:06:08 crc kubenswrapper[4676]: I0124 01:06:08.649091 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1486ea92-d267-49cd-8516-d474ef25c2df/openstack-network-exporter/0.log" Jan 24 01:06:08 crc kubenswrapper[4676]: I0124 01:06:08.709719 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1486ea92-d267-49cd-8516-d474ef25c2df/ovn-northd/0.log" Jan 24 01:06:08 crc kubenswrapper[4676]: I0124 01:06:08.824793 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3bff778a-b10f-4ba9-a12f-f4086608fd30/openstack-network-exporter/0.log" Jan 24 01:06:08 crc kubenswrapper[4676]: I0124 01:06:08.875749 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3bff778a-b10f-4ba9-a12f-f4086608fd30/ovsdbserver-nb/0.log" Jan 24 01:06:08 crc kubenswrapper[4676]: I0124 01:06:08.944717 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f3648c65-9fcb-4a9e-b4cb-d8437dc00141/openstack-network-exporter/0.log" Jan 24 01:06:09 crc kubenswrapper[4676]: I0124 01:06:09.134321 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f3648c65-9fcb-4a9e-b4cb-d8437dc00141/ovsdbserver-sb/0.log" Jan 24 01:06:09 crc kubenswrapper[4676]: I0124 01:06:09.136583 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-765f6cdf58-5q9v9_21e63383-223a-4247-8589-03ab5a33f980/placement-api/0.log" Jan 24 01:06:09 crc kubenswrapper[4676]: I0124 
01:06:09.198061 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-765f6cdf58-5q9v9_21e63383-223a-4247-8589-03ab5a33f980/placement-log/0.log" Jan 24 01:06:09 crc kubenswrapper[4676]: I0124 01:06:09.295019 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2df1d42-fa93-4771-ba77-1c27f820b298/setup-container/0.log" Jan 24 01:06:09 crc kubenswrapper[4676]: I0124 01:06:09.435868 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2df1d42-fa93-4771-ba77-1c27f820b298/rabbitmq/0.log" Jan 24 01:06:09 crc kubenswrapper[4676]: I0124 01:06:09.470219 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2df1d42-fa93-4771-ba77-1c27f820b298/setup-container/0.log" Jan 24 01:06:09 crc kubenswrapper[4676]: I0124 01:06:09.492696 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c162e478-58e3-4a83-97cb-29887613c1aa/setup-container/0.log" Jan 24 01:06:09 crc kubenswrapper[4676]: I0124 01:06:09.681845 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c162e478-58e3-4a83-97cb-29887613c1aa/setup-container/0.log" Jan 24 01:06:09 crc kubenswrapper[4676]: I0124 01:06:09.721786 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c162e478-58e3-4a83-97cb-29887613c1aa/rabbitmq/0.log" Jan 24 01:06:09 crc kubenswrapper[4676]: I0124 01:06:09.726768 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-448ht_447c1e1f-d798-4bcc-a8ef-91d4ad5426a5/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:09 crc kubenswrapper[4676]: I0124 01:06:09.918030 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-qx6r8_19b712ec-28a7-419f-9f09-7d0b0ecbf747/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:09 crc kubenswrapper[4676]: I0124 01:06:09.970006 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fwvvd_15cfbdc6-1f3b-49a5-8822-c4af1e686731/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:10 crc kubenswrapper[4676]: I0124 01:06:10.086467 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-mhrvn_2fc6ead9-bdd0-49ef-9da8-96ccf67f6ec1/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:10 crc kubenswrapper[4676]: I0124 01:06:10.173271 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-8b9rz_d1ed73dc-4392-4d20-a592-4a8c5ba9c104/ssh-known-hosts-edpm-deployment/0.log" Jan 24 01:06:10 crc kubenswrapper[4676]: I0124 01:06:10.406718 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f47776b4c-v4xb2_b976b9e2-b80e-4626-919d-3bb84f0151e8/proxy-httpd/0.log" Jan 24 01:06:10 crc kubenswrapper[4676]: I0124 01:06:10.465803 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f47776b4c-v4xb2_b976b9e2-b80e-4626-919d-3bb84f0151e8/proxy-server/0.log" Jan 24 01:06:10 crc kubenswrapper[4676]: I0124 01:06:10.553095 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-5fmzb_4ff61f48-e451-47e8-adcc-0870b29d28a9/swift-ring-rebalance/0.log" Jan 24 01:06:10 crc kubenswrapper[4676]: I0124 01:06:10.671757 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/account-reaper/0.log" Jan 24 01:06:10 crc kubenswrapper[4676]: I0124 01:06:10.693775 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/account-auditor/0.log" Jan 24 01:06:10 crc kubenswrapper[4676]: I0124 01:06:10.738665 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/account-replicator/0.log" Jan 24 01:06:10 crc kubenswrapper[4676]: I0124 01:06:10.785595 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/account-server/0.log" Jan 24 01:06:10 crc kubenswrapper[4676]: I0124 01:06:10.816353 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/container-auditor/0.log" Jan 24 01:06:10 crc kubenswrapper[4676]: I0124 01:06:10.916597 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/container-replicator/0.log" Jan 24 01:06:10 crc kubenswrapper[4676]: I0124 01:06:10.926936 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/container-server/0.log" Jan 24 01:06:10 crc kubenswrapper[4676]: I0124 01:06:10.963707 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/container-updater/0.log" Jan 24 01:06:11 crc kubenswrapper[4676]: I0124 01:06:11.007545 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/object-auditor/0.log" Jan 24 01:06:11 crc kubenswrapper[4676]: I0124 01:06:11.017080 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/object-expirer/0.log" Jan 24 01:06:11 crc kubenswrapper[4676]: I0124 01:06:11.114800 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/object-server/0.log" Jan 24 01:06:11 crc kubenswrapper[4676]: I0124 01:06:11.147834 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/object-replicator/0.log" Jan 24 01:06:11 crc kubenswrapper[4676]: I0124 01:06:11.163843 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/object-updater/0.log" Jan 24 01:06:11 crc kubenswrapper[4676]: I0124 01:06:11.226815 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/rsync/0.log" Jan 24 01:06:11 crc kubenswrapper[4676]: I0124 01:06:11.228166 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4620e725-9218-461b-a56d-104bcb7f1df4/swift-recon-cron/0.log" Jan 24 01:06:11 crc kubenswrapper[4676]: I0124 01:06:11.419784 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_3e2adf44-9053-4dcd-9d47-27910710dbc8/tempest-tests-tempest-tests-runner/0.log" Jan 24 01:06:11 crc kubenswrapper[4676]: I0124 01:06:11.458017 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-j8gtv_7ef31551-e4ed-48d0-a4d6-f9c2fb515966/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:11 crc kubenswrapper[4676]: I0124 01:06:11.559412 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e1574f42-e89a-40d4-b6da-2d4ef0824916/test-operator-logs-container/0.log" Jan 24 01:06:11 crc kubenswrapper[4676]: I0124 01:06:11.686047 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-hx5pd_83640815-cc06-4abe-a06f-20a1f8798609/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 01:06:13 crc kubenswrapper[4676]: I0124 01:06:13.256011 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:06:13 crc kubenswrapper[4676]: E0124 01:06:13.256647 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:06:20 crc kubenswrapper[4676]: I0124 01:06:20.267792 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7cbcm"] Jan 24 01:06:20 crc kubenswrapper[4676]: E0124 01:06:20.268750 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8" containerName="container-00" Jan 24 01:06:20 crc kubenswrapper[4676]: I0124 01:06:20.268767 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8" containerName="container-00" Jan 24 01:06:20 crc kubenswrapper[4676]: I0124 01:06:20.268993 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="d98ed69b-2cc8-4c59-a562-d1cafcfcfcf8" containerName="container-00" Jan 24 01:06:20 crc kubenswrapper[4676]: I0124 01:06:20.270387 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:20 crc kubenswrapper[4676]: I0124 01:06:20.285611 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7cbcm"] Jan 24 01:06:20 crc kubenswrapper[4676]: I0124 01:06:20.375387 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/218ff773-95bb-4b63-a4d5-b7fb752b5870-catalog-content\") pod \"community-operators-7cbcm\" (UID: \"218ff773-95bb-4b63-a4d5-b7fb752b5870\") " pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:20 crc kubenswrapper[4676]: I0124 01:06:20.375556 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kxvg\" (UniqueName: \"kubernetes.io/projected/218ff773-95bb-4b63-a4d5-b7fb752b5870-kube-api-access-7kxvg\") pod \"community-operators-7cbcm\" (UID: \"218ff773-95bb-4b63-a4d5-b7fb752b5870\") " pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:20 crc kubenswrapper[4676]: I0124 01:06:20.375590 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/218ff773-95bb-4b63-a4d5-b7fb752b5870-utilities\") pod \"community-operators-7cbcm\" (UID: \"218ff773-95bb-4b63-a4d5-b7fb752b5870\") " pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:20 crc kubenswrapper[4676]: I0124 01:06:20.477305 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kxvg\" (UniqueName: \"kubernetes.io/projected/218ff773-95bb-4b63-a4d5-b7fb752b5870-kube-api-access-7kxvg\") pod \"community-operators-7cbcm\" (UID: \"218ff773-95bb-4b63-a4d5-b7fb752b5870\") " pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:20 crc kubenswrapper[4676]: I0124 01:06:20.477356 4676 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/218ff773-95bb-4b63-a4d5-b7fb752b5870-utilities\") pod \"community-operators-7cbcm\" (UID: \"218ff773-95bb-4b63-a4d5-b7fb752b5870\") " pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:20 crc kubenswrapper[4676]: I0124 01:06:20.477450 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/218ff773-95bb-4b63-a4d5-b7fb752b5870-catalog-content\") pod \"community-operators-7cbcm\" (UID: \"218ff773-95bb-4b63-a4d5-b7fb752b5870\") " pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:20 crc kubenswrapper[4676]: I0124 01:06:20.477842 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/218ff773-95bb-4b63-a4d5-b7fb752b5870-catalog-content\") pod \"community-operators-7cbcm\" (UID: \"218ff773-95bb-4b63-a4d5-b7fb752b5870\") " pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:20 crc kubenswrapper[4676]: I0124 01:06:20.478305 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/218ff773-95bb-4b63-a4d5-b7fb752b5870-utilities\") pod \"community-operators-7cbcm\" (UID: \"218ff773-95bb-4b63-a4d5-b7fb752b5870\") " pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:20 crc kubenswrapper[4676]: I0124 01:06:20.497295 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kxvg\" (UniqueName: \"kubernetes.io/projected/218ff773-95bb-4b63-a4d5-b7fb752b5870-kube-api-access-7kxvg\") pod \"community-operators-7cbcm\" (UID: \"218ff773-95bb-4b63-a4d5-b7fb752b5870\") " pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:20 crc kubenswrapper[4676]: I0124 01:06:20.627114 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:21 crc kubenswrapper[4676]: I0124 01:06:21.183334 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7cbcm"] Jan 24 01:06:21 crc kubenswrapper[4676]: I0124 01:06:21.539568 4676 generic.go:334] "Generic (PLEG): container finished" podID="218ff773-95bb-4b63-a4d5-b7fb752b5870" containerID="d2ceec6e32e43b9f2fd6d261e6b3cdd5b94e32ac761c00f5b4b557088654fd06" exitCode=0 Jan 24 01:06:21 crc kubenswrapper[4676]: I0124 01:06:21.539670 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cbcm" event={"ID":"218ff773-95bb-4b63-a4d5-b7fb752b5870","Type":"ContainerDied","Data":"d2ceec6e32e43b9f2fd6d261e6b3cdd5b94e32ac761c00f5b4b557088654fd06"} Jan 24 01:06:21 crc kubenswrapper[4676]: I0124 01:06:21.539808 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cbcm" event={"ID":"218ff773-95bb-4b63-a4d5-b7fb752b5870","Type":"ContainerStarted","Data":"76f1d671e1f62de1005f97f3b82a4e50a1c9871c76ad758d00aea03a6ba36366"} Jan 24 01:06:23 crc kubenswrapper[4676]: I0124 01:06:23.556761 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cbcm" event={"ID":"218ff773-95bb-4b63-a4d5-b7fb752b5870","Type":"ContainerStarted","Data":"c086e3b7dcaf62c1a5d2423eca56792f5f7e8612c92104aeb1a3622b5b0feb50"} Jan 24 01:06:24 crc kubenswrapper[4676]: I0124 01:06:24.566705 4676 generic.go:334] "Generic (PLEG): container finished" podID="218ff773-95bb-4b63-a4d5-b7fb752b5870" containerID="c086e3b7dcaf62c1a5d2423eca56792f5f7e8612c92104aeb1a3622b5b0feb50" exitCode=0 Jan 24 01:06:24 crc kubenswrapper[4676]: I0124 01:06:24.566800 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cbcm" 
event={"ID":"218ff773-95bb-4b63-a4d5-b7fb752b5870","Type":"ContainerDied","Data":"c086e3b7dcaf62c1a5d2423eca56792f5f7e8612c92104aeb1a3622b5b0feb50"} Jan 24 01:06:24 crc kubenswrapper[4676]: I0124 01:06:24.567199 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cbcm" event={"ID":"218ff773-95bb-4b63-a4d5-b7fb752b5870","Type":"ContainerStarted","Data":"4623a790b30d939fdb7ee79f3f467c46f1f8092f29d016a602d7699d5ad76330"} Jan 24 01:06:24 crc kubenswrapper[4676]: I0124 01:06:24.584094 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7cbcm" podStartSLOduration=2.064078141 podStartE2EDuration="4.584077772s" podCreationTimestamp="2026-01-24 01:06:20 +0000 UTC" firstStartedPulling="2026-01-24 01:06:21.540777145 +0000 UTC m=+3765.570748146" lastFinishedPulling="2026-01-24 01:06:24.060776776 +0000 UTC m=+3768.090747777" observedRunningTime="2026-01-24 01:06:24.579784531 +0000 UTC m=+3768.609755532" watchObservedRunningTime="2026-01-24 01:06:24.584077772 +0000 UTC m=+3768.614048773" Jan 24 01:06:25 crc kubenswrapper[4676]: I0124 01:06:25.255737 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:06:25 crc kubenswrapper[4676]: E0124 01:06:25.256104 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:06:30 crc kubenswrapper[4676]: I0124 01:06:30.640525 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:30 crc 
kubenswrapper[4676]: I0124 01:06:30.642265 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:31 crc kubenswrapper[4676]: I0124 01:06:31.063660 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:31 crc kubenswrapper[4676]: I0124 01:06:31.699470 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:31 crc kubenswrapper[4676]: I0124 01:06:31.747140 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cbcm"] Jan 24 01:06:33 crc kubenswrapper[4676]: I0124 01:06:33.667707 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7cbcm" podUID="218ff773-95bb-4b63-a4d5-b7fb752b5870" containerName="registry-server" containerID="cri-o://4623a790b30d939fdb7ee79f3f467c46f1f8092f29d016a602d7699d5ad76330" gracePeriod=2 Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.633948 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.677745 4676 generic.go:334] "Generic (PLEG): container finished" podID="218ff773-95bb-4b63-a4d5-b7fb752b5870" containerID="4623a790b30d939fdb7ee79f3f467c46f1f8092f29d016a602d7699d5ad76330" exitCode=0 Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.677788 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cbcm" event={"ID":"218ff773-95bb-4b63-a4d5-b7fb752b5870","Type":"ContainerDied","Data":"4623a790b30d939fdb7ee79f3f467c46f1f8092f29d016a602d7699d5ad76330"} Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.677818 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cbcm" event={"ID":"218ff773-95bb-4b63-a4d5-b7fb752b5870","Type":"ContainerDied","Data":"76f1d671e1f62de1005f97f3b82a4e50a1c9871c76ad758d00aea03a6ba36366"} Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.677836 4676 scope.go:117] "RemoveContainer" containerID="4623a790b30d939fdb7ee79f3f467c46f1f8092f29d016a602d7699d5ad76330" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.677986 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7cbcm" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.699286 4676 scope.go:117] "RemoveContainer" containerID="c086e3b7dcaf62c1a5d2423eca56792f5f7e8612c92104aeb1a3622b5b0feb50" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.716919 4676 scope.go:117] "RemoveContainer" containerID="d2ceec6e32e43b9f2fd6d261e6b3cdd5b94e32ac761c00f5b4b557088654fd06" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.752686 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/218ff773-95bb-4b63-a4d5-b7fb752b5870-catalog-content\") pod \"218ff773-95bb-4b63-a4d5-b7fb752b5870\" (UID: \"218ff773-95bb-4b63-a4d5-b7fb752b5870\") " Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.752858 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/218ff773-95bb-4b63-a4d5-b7fb752b5870-utilities\") pod \"218ff773-95bb-4b63-a4d5-b7fb752b5870\" (UID: \"218ff773-95bb-4b63-a4d5-b7fb752b5870\") " Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.752902 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kxvg\" (UniqueName: \"kubernetes.io/projected/218ff773-95bb-4b63-a4d5-b7fb752b5870-kube-api-access-7kxvg\") pod \"218ff773-95bb-4b63-a4d5-b7fb752b5870\" (UID: \"218ff773-95bb-4b63-a4d5-b7fb752b5870\") " Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.754751 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/218ff773-95bb-4b63-a4d5-b7fb752b5870-utilities" (OuterVolumeSpecName: "utilities") pod "218ff773-95bb-4b63-a4d5-b7fb752b5870" (UID: "218ff773-95bb-4b63-a4d5-b7fb752b5870"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.760672 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/218ff773-95bb-4b63-a4d5-b7fb752b5870-kube-api-access-7kxvg" (OuterVolumeSpecName: "kube-api-access-7kxvg") pod "218ff773-95bb-4b63-a4d5-b7fb752b5870" (UID: "218ff773-95bb-4b63-a4d5-b7fb752b5870"). InnerVolumeSpecName "kube-api-access-7kxvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.771706 4676 scope.go:117] "RemoveContainer" containerID="4623a790b30d939fdb7ee79f3f467c46f1f8092f29d016a602d7699d5ad76330" Jan 24 01:06:34 crc kubenswrapper[4676]: E0124 01:06:34.783840 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4623a790b30d939fdb7ee79f3f467c46f1f8092f29d016a602d7699d5ad76330\": container with ID starting with 4623a790b30d939fdb7ee79f3f467c46f1f8092f29d016a602d7699d5ad76330 not found: ID does not exist" containerID="4623a790b30d939fdb7ee79f3f467c46f1f8092f29d016a602d7699d5ad76330" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.783875 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4623a790b30d939fdb7ee79f3f467c46f1f8092f29d016a602d7699d5ad76330"} err="failed to get container status \"4623a790b30d939fdb7ee79f3f467c46f1f8092f29d016a602d7699d5ad76330\": rpc error: code = NotFound desc = could not find container \"4623a790b30d939fdb7ee79f3f467c46f1f8092f29d016a602d7699d5ad76330\": container with ID starting with 4623a790b30d939fdb7ee79f3f467c46f1f8092f29d016a602d7699d5ad76330 not found: ID does not exist" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.783899 4676 scope.go:117] "RemoveContainer" containerID="c086e3b7dcaf62c1a5d2423eca56792f5f7e8612c92104aeb1a3622b5b0feb50" Jan 24 01:06:34 crc kubenswrapper[4676]: E0124 01:06:34.784341 
4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c086e3b7dcaf62c1a5d2423eca56792f5f7e8612c92104aeb1a3622b5b0feb50\": container with ID starting with c086e3b7dcaf62c1a5d2423eca56792f5f7e8612c92104aeb1a3622b5b0feb50 not found: ID does not exist" containerID="c086e3b7dcaf62c1a5d2423eca56792f5f7e8612c92104aeb1a3622b5b0feb50" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.784422 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c086e3b7dcaf62c1a5d2423eca56792f5f7e8612c92104aeb1a3622b5b0feb50"} err="failed to get container status \"c086e3b7dcaf62c1a5d2423eca56792f5f7e8612c92104aeb1a3622b5b0feb50\": rpc error: code = NotFound desc = could not find container \"c086e3b7dcaf62c1a5d2423eca56792f5f7e8612c92104aeb1a3622b5b0feb50\": container with ID starting with c086e3b7dcaf62c1a5d2423eca56792f5f7e8612c92104aeb1a3622b5b0feb50 not found: ID does not exist" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.784453 4676 scope.go:117] "RemoveContainer" containerID="d2ceec6e32e43b9f2fd6d261e6b3cdd5b94e32ac761c00f5b4b557088654fd06" Jan 24 01:06:34 crc kubenswrapper[4676]: E0124 01:06:34.785285 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2ceec6e32e43b9f2fd6d261e6b3cdd5b94e32ac761c00f5b4b557088654fd06\": container with ID starting with d2ceec6e32e43b9f2fd6d261e6b3cdd5b94e32ac761c00f5b4b557088654fd06 not found: ID does not exist" containerID="d2ceec6e32e43b9f2fd6d261e6b3cdd5b94e32ac761c00f5b4b557088654fd06" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.785313 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2ceec6e32e43b9f2fd6d261e6b3cdd5b94e32ac761c00f5b4b557088654fd06"} err="failed to get container status \"d2ceec6e32e43b9f2fd6d261e6b3cdd5b94e32ac761c00f5b4b557088654fd06\": rpc error: code = 
NotFound desc = could not find container \"d2ceec6e32e43b9f2fd6d261e6b3cdd5b94e32ac761c00f5b4b557088654fd06\": container with ID starting with d2ceec6e32e43b9f2fd6d261e6b3cdd5b94e32ac761c00f5b4b557088654fd06 not found: ID does not exist" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.822219 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/218ff773-95bb-4b63-a4d5-b7fb752b5870-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "218ff773-95bb-4b63-a4d5-b7fb752b5870" (UID: "218ff773-95bb-4b63-a4d5-b7fb752b5870"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.855320 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/218ff773-95bb-4b63-a4d5-b7fb752b5870-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.855349 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kxvg\" (UniqueName: \"kubernetes.io/projected/218ff773-95bb-4b63-a4d5-b7fb752b5870-kube-api-access-7kxvg\") on node \"crc\" DevicePath \"\"" Jan 24 01:06:34 crc kubenswrapper[4676]: I0124 01:06:34.855360 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/218ff773-95bb-4b63-a4d5-b7fb752b5870-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 01:06:35 crc kubenswrapper[4676]: I0124 01:06:35.025012 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cbcm"] Jan 24 01:06:35 crc kubenswrapper[4676]: I0124 01:06:35.035136 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7cbcm"] Jan 24 01:06:36 crc kubenswrapper[4676]: I0124 01:06:36.275170 4676 scope.go:117] "RemoveContainer" 
containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:06:36 crc kubenswrapper[4676]: I0124 01:06:36.275365 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="218ff773-95bb-4b63-a4d5-b7fb752b5870" path="/var/lib/kubelet/pods/218ff773-95bb-4b63-a4d5-b7fb752b5870/volumes" Jan 24 01:06:36 crc kubenswrapper[4676]: E0124 01:06:36.275683 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:06:39 crc kubenswrapper[4676]: I0124 01:06:39.174916 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789_9d669669-26ba-4775-b7cc-e97cc7dbe326/util/0.log" Jan 24 01:06:39 crc kubenswrapper[4676]: I0124 01:06:39.291932 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789_9d669669-26ba-4775-b7cc-e97cc7dbe326/util/0.log" Jan 24 01:06:39 crc kubenswrapper[4676]: I0124 01:06:39.362602 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789_9d669669-26ba-4775-b7cc-e97cc7dbe326/pull/0.log" Jan 24 01:06:39 crc kubenswrapper[4676]: I0124 01:06:39.379300 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789_9d669669-26ba-4775-b7cc-e97cc7dbe326/pull/0.log" Jan 24 01:06:39 crc kubenswrapper[4676]: I0124 01:06:39.569345 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789_9d669669-26ba-4775-b7cc-e97cc7dbe326/pull/0.log" Jan 24 01:06:39 crc kubenswrapper[4676]: I0124 01:06:39.617559 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789_9d669669-26ba-4775-b7cc-e97cc7dbe326/util/0.log" Jan 24 01:06:39 crc kubenswrapper[4676]: I0124 01:06:39.632793 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ba3e01239114c6476dd7da80952015a1e39aad88ba09ffb7f3cf48a9e6vf789_9d669669-26ba-4775-b7cc-e97cc7dbe326/extract/0.log" Jan 24 01:06:39 crc kubenswrapper[4676]: I0124 01:06:39.842297 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-fgplq_02123851-7d2f-477b-9c60-5a9922a0bc97/manager/0.log" Jan 24 01:06:39 crc kubenswrapper[4676]: I0124 01:06:39.892853 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-c8c6m_0cd05b9f-6699-46e3-ae36-9f21352e6c8e/manager/0.log" Jan 24 01:06:40 crc kubenswrapper[4676]: I0124 01:06:40.034407 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-js6db_a9f1e2a4-c9fa-4136-aa76-059dc2ed9c85/manager/0.log" Jan 24 01:06:40 crc kubenswrapper[4676]: I0124 01:06:40.187873 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-btxnv_29e4b64d-19bd-419b-9e29-7a41e6f12ae0/manager/0.log" Jan 24 01:06:40 crc kubenswrapper[4676]: I0124 01:06:40.319454 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-p2nr8_5e9cf1cb-c413-45ad-8a51-bf35407fcdfe/manager/0.log" Jan 24 01:06:40 crc kubenswrapper[4676]: I0124 01:06:40.446699 
4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-rbzsj_6b8541f9-a37a-41d6-8006-3d0335c3abb5/manager/0.log" Jan 24 01:06:40 crc kubenswrapper[4676]: I0124 01:06:40.745019 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-jxx26_921e121c-5261-4fe7-8171-6b634babedf4/manager/0.log" Jan 24 01:06:41 crc kubenswrapper[4676]: I0124 01:06:41.000844 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-v25h4_555ebb8f-1bc3-4b8d-9f37-cad92b48477c/manager/0.log" Jan 24 01:06:41 crc kubenswrapper[4676]: I0124 01:06:41.198782 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-2gzqg_9a3f9a14-1138-425d-8a56-454b282d7d9f/manager/0.log" Jan 24 01:06:41 crc kubenswrapper[4676]: I0124 01:06:41.277465 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-g4d8z_dfc79179-d245-4360-be6e-8b43441e23ed/manager/0.log" Jan 24 01:06:41 crc kubenswrapper[4676]: I0124 01:06:41.482691 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-rpn8n_4ce661f6-26e2-4da2-a759-e493a60587b2/manager/0.log" Jan 24 01:06:41 crc kubenswrapper[4676]: I0124 01:06:41.609343 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-wqbcz_dd6346d8-9cf1-4364-b480-f4c2d872472f/manager/0.log" Jan 24 01:06:41 crc kubenswrapper[4676]: I0124 01:06:41.725618 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-4xp45_df9ab5f0-f577-4303-8045-f960c67a6936/manager/0.log" Jan 24 01:06:41 crc kubenswrapper[4676]: I0124 
01:06:41.851322 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-k8lw7_ced74bcb-8345-40c5-b2d4-3d369f30b835/manager/0.log" Jan 24 01:06:42 crc kubenswrapper[4676]: I0124 01:06:42.002245 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549w5ms_196f45b9-e656-4760-b058-e0b5c08a50d9/manager/0.log" Jan 24 01:06:42 crc kubenswrapper[4676]: I0124 01:06:42.239154 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-69647cdbc5-96fbr_07dc00eb-bfcb-4d0d-bd6a-9e4b52e3e7f6/operator/0.log" Jan 24 01:06:42 crc kubenswrapper[4676]: I0124 01:06:42.449138 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-kw9l8_25146f99-405a-4473-bf27-69a7195a3338/registry-server/0.log" Jan 24 01:06:42 crc kubenswrapper[4676]: I0124 01:06:42.838352 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-tdqwh_d85fa79d-818f-4079-aac4-f3fa51a90e9a/manager/0.log" Jan 24 01:06:42 crc kubenswrapper[4676]: I0124 01:06:42.918281 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-49nrq_53358678-d763-4b02-a157-86a57ebd0305/manager/0.log" Jan 24 01:06:43 crc kubenswrapper[4676]: I0124 01:06:43.252143 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5d5f8c4f48-md6zs_9d79c791-c851-4c4a-aa2d-d175b668b0f5/manager/0.log" Jan 24 01:06:43 crc kubenswrapper[4676]: I0124 01:06:43.257759 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-h44qw_fce4c8d0-b903-4873-8c89-2f4b9dd9c05d/operator/0.log" Jan 24 01:06:43 crc 
kubenswrapper[4676]: I0124 01:06:43.342197 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5df95d5965-h8wx9_b0c8972b-31d7-40c1-bc65-1478718d41a5/manager/0.log" Jan 24 01:06:43 crc kubenswrapper[4676]: I0124 01:06:43.507136 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-4c5zl_060e1c8d-dfa6-428f-bffe-d89ac3dab8c3/manager/0.log" Jan 24 01:06:43 crc kubenswrapper[4676]: I0124 01:06:43.593513 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-gqg82_6245b73e-9fba-4ad7-bbbc-31db48c03825/manager/0.log" Jan 24 01:06:43 crc kubenswrapper[4676]: I0124 01:06:43.750672 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-h6hzt_ccb1ff12-bef7-4f23-b084-fae32f8202ac/manager/0.log" Jan 24 01:06:51 crc kubenswrapper[4676]: I0124 01:06:51.257120 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:06:51 crc kubenswrapper[4676]: E0124 01:06:51.259051 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:07:04 crc kubenswrapper[4676]: I0124 01:07:04.928209 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mkwj8_f53afe6a-307c-4b0d-88cb-596703f35f8a/control-plane-machine-set-operator/0.log" Jan 24 01:07:05 crc kubenswrapper[4676]: I0124 01:07:05.086321 
4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jmv6d_9c51c973-c370-41e8-b167-25d3b11418bf/kube-rbac-proxy/0.log" Jan 24 01:07:05 crc kubenswrapper[4676]: I0124 01:07:05.125348 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jmv6d_9c51c973-c370-41e8-b167-25d3b11418bf/machine-api-operator/0.log" Jan 24 01:07:06 crc kubenswrapper[4676]: I0124 01:07:06.271549 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:07:06 crc kubenswrapper[4676]: E0124 01:07:06.272020 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:07:18 crc kubenswrapper[4676]: I0124 01:07:18.805019 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-jws6v_1b51f181-5a85-4c07-b259-f67d17bf1134/cert-manager-controller/0.log" Jan 24 01:07:18 crc kubenswrapper[4676]: I0124 01:07:18.921410 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-s27fj_6db681a1-6165-4404-81a9-a189e9b30bfd/cert-manager-cainjector/0.log" Jan 24 01:07:19 crc kubenswrapper[4676]: I0124 01:07:19.002956 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-w7kb7_8f43788b-d559-4623-ae87-a820a2f23b08/cert-manager-webhook/0.log" Jan 24 01:07:21 crc kubenswrapper[4676]: I0124 01:07:21.256860 4676 scope.go:117] "RemoveContainer" 
containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:07:21 crc kubenswrapper[4676]: E0124 01:07:21.257497 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:07:32 crc kubenswrapper[4676]: I0124 01:07:32.345604 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-95988_4db34b09-85e6-435d-b991-c2513eec5d17/nmstate-console-plugin/0.log" Jan 24 01:07:32 crc kubenswrapper[4676]: I0124 01:07:32.494776 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lkqsz_6bdbc24e-ee52-41f0-9aaf-1091ac803c27/nmstate-handler/0.log" Jan 24 01:07:32 crc kubenswrapper[4676]: I0124 01:07:32.570717 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rv2q7_0c99caac-d9eb-494c-bd04-c18dbc8a0844/kube-rbac-proxy/0.log" Jan 24 01:07:32 crc kubenswrapper[4676]: I0124 01:07:32.621553 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rv2q7_0c99caac-d9eb-494c-bd04-c18dbc8a0844/nmstate-metrics/0.log" Jan 24 01:07:32 crc kubenswrapper[4676]: I0124 01:07:32.809412 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-jsbxx_efdaf1c0-0096-4f21-a0e2-2fc6f6e04de2/nmstate-operator/0.log" Jan 24 01:07:32 crc kubenswrapper[4676]: I0124 01:07:32.852051 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-cpbrv_badb3470-b60b-44ca-8d9e-52191ea016fa/nmstate-webhook/0.log" Jan 24 01:07:35 crc kubenswrapper[4676]: I0124 01:07:35.255749 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:07:35 crc kubenswrapper[4676]: E0124 01:07:35.256308 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:07:50 crc kubenswrapper[4676]: I0124 01:07:50.255616 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:07:50 crc kubenswrapper[4676]: E0124 01:07:50.256313 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:08:00 crc kubenswrapper[4676]: I0124 01:08:00.217033 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-jqk4f_f5517856-fac7-4312-ab46-86bbd5c1282d/controller/0.log" Jan 24 01:08:00 crc kubenswrapper[4676]: I0124 01:08:00.245003 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-jqk4f_f5517856-fac7-4312-ab46-86bbd5c1282d/kube-rbac-proxy/0.log" Jan 24 01:08:00 crc kubenswrapper[4676]: I0124 01:08:00.434782 4676 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-frr-files/0.log" Jan 24 01:08:00 crc kubenswrapper[4676]: I0124 01:08:00.694780 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-reloader/0.log" Jan 24 01:08:00 crc kubenswrapper[4676]: I0124 01:08:00.727157 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-reloader/0.log" Jan 24 01:08:00 crc kubenswrapper[4676]: I0124 01:08:00.744151 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-metrics/0.log" Jan 24 01:08:00 crc kubenswrapper[4676]: I0124 01:08:00.752408 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-frr-files/0.log" Jan 24 01:08:00 crc kubenswrapper[4676]: I0124 01:08:00.883271 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-frr-files/0.log" Jan 24 01:08:00 crc kubenswrapper[4676]: I0124 01:08:00.888764 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-reloader/0.log" Jan 24 01:08:00 crc kubenswrapper[4676]: I0124 01:08:00.944922 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-metrics/0.log" Jan 24 01:08:00 crc kubenswrapper[4676]: I0124 01:08:00.987146 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-metrics/0.log" Jan 24 01:08:01 crc kubenswrapper[4676]: I0124 01:08:01.150359 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-frr-files/0.log" Jan 24 01:08:01 crc kubenswrapper[4676]: I0124 01:08:01.159359 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-metrics/0.log" Jan 24 01:08:01 crc kubenswrapper[4676]: I0124 01:08:01.187565 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/cp-reloader/0.log" Jan 24 01:08:01 crc kubenswrapper[4676]: I0124 01:08:01.191870 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/controller/0.log" Jan 24 01:08:01 crc kubenswrapper[4676]: I0124 01:08:01.442057 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/frr-metrics/0.log" Jan 24 01:08:01 crc kubenswrapper[4676]: I0124 01:08:01.450736 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/kube-rbac-proxy/0.log" Jan 24 01:08:01 crc kubenswrapper[4676]: I0124 01:08:01.475534 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/kube-rbac-proxy-frr/0.log" Jan 24 01:08:01 crc kubenswrapper[4676]: I0124 01:08:01.682050 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/reloader/0.log" Jan 24 01:08:01 crc kubenswrapper[4676]: I0124 01:08:01.758813 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-s7b59_2ce22d83-ee4f-4ad7-8882-b876d4ed52a2/frr-k8s-webhook-server/0.log" Jan 24 01:08:01 crc kubenswrapper[4676]: I0124 01:08:01.971739 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6868db74d6-qfhwb_59200cad-bd1a-472a-a1a1-adccb5211b21/manager/0.log" Jan 24 01:08:02 crc kubenswrapper[4676]: I0124 01:08:02.213181 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6dbffff8f5-cb5l5_adfe0dac-5ac5-44b8-97db-088d1ac83d34/webhook-server/0.log" Jan 24 01:08:02 crc kubenswrapper[4676]: I0124 01:08:02.285358 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zh6td_7da3019e-01de-4671-a78f-6c0d2e57fde3/kube-rbac-proxy/0.log" Jan 24 01:08:02 crc kubenswrapper[4676]: I0124 01:08:02.390238 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-szpqt_8a7798b0-f97f-4804-a406-22ecc8b45677/frr/0.log" Jan 24 01:08:02 crc kubenswrapper[4676]: I0124 01:08:02.681587 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zh6td_7da3019e-01de-4671-a78f-6c0d2e57fde3/speaker/0.log" Jan 24 01:08:04 crc kubenswrapper[4676]: I0124 01:08:04.256780 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:08:04 crc kubenswrapper[4676]: E0124 01:08:04.257801 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:08:16 crc kubenswrapper[4676]: I0124 01:08:16.267878 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:08:16 crc kubenswrapper[4676]: E0124 01:08:16.269022 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:08:18 crc kubenswrapper[4676]: I0124 01:08:18.481399 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j_3c111688-154b-47fa-8f89-6e48007b1fec/util/0.log" Jan 24 01:08:18 crc kubenswrapper[4676]: I0124 01:08:18.679144 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j_3c111688-154b-47fa-8f89-6e48007b1fec/pull/0.log" Jan 24 01:08:18 crc kubenswrapper[4676]: I0124 01:08:18.743853 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j_3c111688-154b-47fa-8f89-6e48007b1fec/pull/0.log" Jan 24 01:08:18 crc kubenswrapper[4676]: I0124 01:08:18.756827 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j_3c111688-154b-47fa-8f89-6e48007b1fec/util/0.log" Jan 24 01:08:19 crc kubenswrapper[4676]: I0124 01:08:19.017993 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j_3c111688-154b-47fa-8f89-6e48007b1fec/pull/0.log" Jan 24 01:08:19 crc kubenswrapper[4676]: I0124 01:08:19.029732 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j_3c111688-154b-47fa-8f89-6e48007b1fec/util/0.log" Jan 24 01:08:19 crc kubenswrapper[4676]: I0124 01:08:19.066117 
4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrxg2j_3c111688-154b-47fa-8f89-6e48007b1fec/extract/0.log" Jan 24 01:08:19 crc kubenswrapper[4676]: I0124 01:08:19.217827 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv_79811b63-a3e6-47b0-8041-247b1536ba50/util/0.log" Jan 24 01:08:19 crc kubenswrapper[4676]: I0124 01:08:19.419178 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv_79811b63-a3e6-47b0-8041-247b1536ba50/pull/0.log" Jan 24 01:08:19 crc kubenswrapper[4676]: I0124 01:08:19.449015 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv_79811b63-a3e6-47b0-8041-247b1536ba50/pull/0.log" Jan 24 01:08:19 crc kubenswrapper[4676]: I0124 01:08:19.465647 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv_79811b63-a3e6-47b0-8041-247b1536ba50/util/0.log" Jan 24 01:08:19 crc kubenswrapper[4676]: I0124 01:08:19.656737 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv_79811b63-a3e6-47b0-8041-247b1536ba50/pull/0.log" Jan 24 01:08:19 crc kubenswrapper[4676]: I0124 01:08:19.748320 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv_79811b63-a3e6-47b0-8041-247b1536ba50/util/0.log" Jan 24 01:08:19 crc kubenswrapper[4676]: I0124 01:08:19.787231 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rcsfv_79811b63-a3e6-47b0-8041-247b1536ba50/extract/0.log" Jan 24 01:08:19 crc kubenswrapper[4676]: I0124 01:08:19.944237 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vsfsj_34862c1d-2d18-42f8-9ef7-71d349c019fd/extract-utilities/0.log" Jan 24 01:08:20 crc kubenswrapper[4676]: I0124 01:08:20.275563 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vsfsj_34862c1d-2d18-42f8-9ef7-71d349c019fd/extract-content/0.log" Jan 24 01:08:20 crc kubenswrapper[4676]: I0124 01:08:20.322243 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vsfsj_34862c1d-2d18-42f8-9ef7-71d349c019fd/extract-utilities/0.log" Jan 24 01:08:20 crc kubenswrapper[4676]: I0124 01:08:20.339887 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vsfsj_34862c1d-2d18-42f8-9ef7-71d349c019fd/extract-content/0.log" Jan 24 01:08:20 crc kubenswrapper[4676]: I0124 01:08:20.637922 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vsfsj_34862c1d-2d18-42f8-9ef7-71d349c019fd/extract-utilities/0.log" Jan 24 01:08:20 crc kubenswrapper[4676]: I0124 01:08:20.684267 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vsfsj_34862c1d-2d18-42f8-9ef7-71d349c019fd/extract-content/0.log" Jan 24 01:08:20 crc kubenswrapper[4676]: I0124 01:08:20.812590 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vsfsj_34862c1d-2d18-42f8-9ef7-71d349c019fd/registry-server/0.log" Jan 24 01:08:20 crc kubenswrapper[4676]: I0124 01:08:20.912837 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-rjjnx_646456dc-35bc-4df2-8f92-55cdfefc6010/extract-utilities/0.log" Jan 24 01:08:21 crc kubenswrapper[4676]: I0124 01:08:21.397080 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rjjnx_646456dc-35bc-4df2-8f92-55cdfefc6010/extract-utilities/0.log" Jan 24 01:08:21 crc kubenswrapper[4676]: I0124 01:08:21.415527 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rjjnx_646456dc-35bc-4df2-8f92-55cdfefc6010/extract-content/0.log" Jan 24 01:08:21 crc kubenswrapper[4676]: I0124 01:08:21.458490 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rjjnx_646456dc-35bc-4df2-8f92-55cdfefc6010/extract-content/0.log" Jan 24 01:08:21 crc kubenswrapper[4676]: I0124 01:08:21.656299 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rjjnx_646456dc-35bc-4df2-8f92-55cdfefc6010/extract-utilities/0.log" Jan 24 01:08:21 crc kubenswrapper[4676]: I0124 01:08:21.664048 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rjjnx_646456dc-35bc-4df2-8f92-55cdfefc6010/extract-content/0.log" Jan 24 01:08:22 crc kubenswrapper[4676]: I0124 01:08:22.058285 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw7tg_4c82da49-780b-431e-bfe7-d52ce3bcb623/extract-utilities/0.log" Jan 24 01:08:22 crc kubenswrapper[4676]: I0124 01:08:22.092017 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rjjnx_646456dc-35bc-4df2-8f92-55cdfefc6010/registry-server/0.log" Jan 24 01:08:22 crc kubenswrapper[4676]: I0124 01:08:22.113682 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-kwptt_8db31db6-2c7c-4688-89c7-328024cd7003/marketplace-operator/0.log" Jan 24 01:08:22 crc kubenswrapper[4676]: I0124 01:08:22.354013 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw7tg_4c82da49-780b-431e-bfe7-d52ce3bcb623/extract-utilities/0.log" Jan 24 01:08:22 crc kubenswrapper[4676]: I0124 01:08:22.385593 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw7tg_4c82da49-780b-431e-bfe7-d52ce3bcb623/extract-content/0.log" Jan 24 01:08:22 crc kubenswrapper[4676]: I0124 01:08:22.429210 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw7tg_4c82da49-780b-431e-bfe7-d52ce3bcb623/extract-content/0.log" Jan 24 01:08:22 crc kubenswrapper[4676]: I0124 01:08:22.546034 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw7tg_4c82da49-780b-431e-bfe7-d52ce3bcb623/extract-utilities/0.log" Jan 24 01:08:22 crc kubenswrapper[4676]: I0124 01:08:22.689704 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw7tg_4c82da49-780b-431e-bfe7-d52ce3bcb623/extract-content/0.log" Jan 24 01:08:22 crc kubenswrapper[4676]: I0124 01:08:22.756467 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw7tg_4c82da49-780b-431e-bfe7-d52ce3bcb623/registry-server/0.log" Jan 24 01:08:22 crc kubenswrapper[4676]: I0124 01:08:22.859866 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6mz4x_829774a4-dd19-462e-829c-f201bddf6886/extract-utilities/0.log" Jan 24 01:08:22 crc kubenswrapper[4676]: I0124 01:08:22.993751 4676 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-6mz4x_829774a4-dd19-462e-829c-f201bddf6886/extract-content/0.log" Jan 24 01:08:23 crc kubenswrapper[4676]: I0124 01:08:23.037506 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6mz4x_829774a4-dd19-462e-829c-f201bddf6886/extract-utilities/0.log" Jan 24 01:08:23 crc kubenswrapper[4676]: I0124 01:08:23.072127 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6mz4x_829774a4-dd19-462e-829c-f201bddf6886/extract-content/0.log" Jan 24 01:08:23 crc kubenswrapper[4676]: I0124 01:08:23.198126 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6mz4x_829774a4-dd19-462e-829c-f201bddf6886/extract-content/0.log" Jan 24 01:08:23 crc kubenswrapper[4676]: I0124 01:08:23.243675 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6mz4x_829774a4-dd19-462e-829c-f201bddf6886/extract-utilities/0.log" Jan 24 01:08:23 crc kubenswrapper[4676]: I0124 01:08:23.624231 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6mz4x_829774a4-dd19-462e-829c-f201bddf6886/registry-server/0.log" Jan 24 01:08:27 crc kubenswrapper[4676]: I0124 01:08:27.255202 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:08:27 crc kubenswrapper[4676]: E0124 01:08:27.255971 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:08:39 crc 
kubenswrapper[4676]: I0124 01:08:39.255904 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:08:39 crc kubenswrapper[4676]: E0124 01:08:39.256750 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:08:53 crc kubenswrapper[4676]: I0124 01:08:53.255705 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:08:53 crc kubenswrapper[4676]: E0124 01:08:53.256621 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.274980 4676 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2nrdw"] Jan 24 01:08:58 crc kubenswrapper[4676]: E0124 01:08:58.275754 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="218ff773-95bb-4b63-a4d5-b7fb752b5870" containerName="extract-content" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.275766 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="218ff773-95bb-4b63-a4d5-b7fb752b5870" containerName="extract-content" Jan 24 01:08:58 crc kubenswrapper[4676]: E0124 01:08:58.275798 4676 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="218ff773-95bb-4b63-a4d5-b7fb752b5870" containerName="registry-server" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.275804 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="218ff773-95bb-4b63-a4d5-b7fb752b5870" containerName="registry-server" Jan 24 01:08:58 crc kubenswrapper[4676]: E0124 01:08:58.275819 4676 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="218ff773-95bb-4b63-a4d5-b7fb752b5870" containerName="extract-utilities" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.275825 4676 state_mem.go:107] "Deleted CPUSet assignment" podUID="218ff773-95bb-4b63-a4d5-b7fb752b5870" containerName="extract-utilities" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.275973 4676 memory_manager.go:354] "RemoveStaleState removing state" podUID="218ff773-95bb-4b63-a4d5-b7fb752b5870" containerName="registry-server" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.277141 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.297212 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2nrdw"] Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.374418 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf4b7\" (UniqueName: \"kubernetes.io/projected/3b5bdb57-1227-4549-a407-7768a82ff571-kube-api-access-gf4b7\") pod \"redhat-operators-2nrdw\" (UID: \"3b5bdb57-1227-4549-a407-7768a82ff571\") " pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.374493 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b5bdb57-1227-4549-a407-7768a82ff571-utilities\") pod \"redhat-operators-2nrdw\" (UID: \"3b5bdb57-1227-4549-a407-7768a82ff571\") " pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.374513 4676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b5bdb57-1227-4549-a407-7768a82ff571-catalog-content\") pod \"redhat-operators-2nrdw\" (UID: \"3b5bdb57-1227-4549-a407-7768a82ff571\") " pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.477213 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf4b7\" (UniqueName: \"kubernetes.io/projected/3b5bdb57-1227-4549-a407-7768a82ff571-kube-api-access-gf4b7\") pod \"redhat-operators-2nrdw\" (UID: \"3b5bdb57-1227-4549-a407-7768a82ff571\") " pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.477521 4676 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b5bdb57-1227-4549-a407-7768a82ff571-utilities\") pod \"redhat-operators-2nrdw\" (UID: \"3b5bdb57-1227-4549-a407-7768a82ff571\") " pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.477542 4676 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b5bdb57-1227-4549-a407-7768a82ff571-catalog-content\") pod \"redhat-operators-2nrdw\" (UID: \"3b5bdb57-1227-4549-a407-7768a82ff571\") " pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.478020 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b5bdb57-1227-4549-a407-7768a82ff571-catalog-content\") pod \"redhat-operators-2nrdw\" (UID: \"3b5bdb57-1227-4549-a407-7768a82ff571\") " pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.478080 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b5bdb57-1227-4549-a407-7768a82ff571-utilities\") pod \"redhat-operators-2nrdw\" (UID: \"3b5bdb57-1227-4549-a407-7768a82ff571\") " pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.499695 4676 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf4b7\" (UniqueName: \"kubernetes.io/projected/3b5bdb57-1227-4549-a407-7768a82ff571-kube-api-access-gf4b7\") pod \"redhat-operators-2nrdw\" (UID: \"3b5bdb57-1227-4549-a407-7768a82ff571\") " pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:08:58 crc kubenswrapper[4676]: I0124 01:08:58.603183 4676 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:08:59 crc kubenswrapper[4676]: I0124 01:08:59.112592 4676 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2nrdw"] Jan 24 01:08:59 crc kubenswrapper[4676]: I0124 01:08:59.856541 4676 generic.go:334] "Generic (PLEG): container finished" podID="3b5bdb57-1227-4549-a407-7768a82ff571" containerID="4cbfe82eff6d462ba3148ce02c0da1c0abc3bda8a2446df0a50c15b4fa35d06b" exitCode=0 Jan 24 01:08:59 crc kubenswrapper[4676]: I0124 01:08:59.856650 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nrdw" event={"ID":"3b5bdb57-1227-4549-a407-7768a82ff571","Type":"ContainerDied","Data":"4cbfe82eff6d462ba3148ce02c0da1c0abc3bda8a2446df0a50c15b4fa35d06b"} Jan 24 01:08:59 crc kubenswrapper[4676]: I0124 01:08:59.857004 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nrdw" event={"ID":"3b5bdb57-1227-4549-a407-7768a82ff571","Type":"ContainerStarted","Data":"d2b9c41fdecc2fd86c961eaa1c90432f775d4cec810f3d698cf239294c2169aa"} Jan 24 01:09:00 crc kubenswrapper[4676]: I0124 01:09:00.871721 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nrdw" event={"ID":"3b5bdb57-1227-4549-a407-7768a82ff571","Type":"ContainerStarted","Data":"706f2526bdcdb0475f55e82716fff7b99c6f21ba4ce5453d4af3afdd341ddbee"} Jan 24 01:09:04 crc kubenswrapper[4676]: I0124 01:09:04.903923 4676 generic.go:334] "Generic (PLEG): container finished" podID="3b5bdb57-1227-4549-a407-7768a82ff571" containerID="706f2526bdcdb0475f55e82716fff7b99c6f21ba4ce5453d4af3afdd341ddbee" exitCode=0 Jan 24 01:09:04 crc kubenswrapper[4676]: I0124 01:09:04.904005 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nrdw" 
event={"ID":"3b5bdb57-1227-4549-a407-7768a82ff571","Type":"ContainerDied","Data":"706f2526bdcdb0475f55e82716fff7b99c6f21ba4ce5453d4af3afdd341ddbee"} Jan 24 01:09:05 crc kubenswrapper[4676]: I0124 01:09:05.914297 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nrdw" event={"ID":"3b5bdb57-1227-4549-a407-7768a82ff571","Type":"ContainerStarted","Data":"93c3dc5d89006d93c816256ff7e52391b95edbe86e4b6bff514031d0a377503e"} Jan 24 01:09:05 crc kubenswrapper[4676]: I0124 01:09:05.937226 4676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2nrdw" podStartSLOduration=2.355179058 podStartE2EDuration="7.93720857s" podCreationTimestamp="2026-01-24 01:08:58 +0000 UTC" firstStartedPulling="2026-01-24 01:08:59.859035085 +0000 UTC m=+3923.889006086" lastFinishedPulling="2026-01-24 01:09:05.441064597 +0000 UTC m=+3929.471035598" observedRunningTime="2026-01-24 01:09:05.930787054 +0000 UTC m=+3929.960758095" watchObservedRunningTime="2026-01-24 01:09:05.93720857 +0000 UTC m=+3929.967179571" Jan 24 01:09:06 crc kubenswrapper[4676]: I0124 01:09:06.279791 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:09:06 crc kubenswrapper[4676]: E0124 01:09:06.280462 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:09:08 crc kubenswrapper[4676]: I0124 01:09:08.603303 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:09:08 crc 
kubenswrapper[4676]: I0124 01:09:08.603563 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:09:09 crc kubenswrapper[4676]: I0124 01:09:09.672277 4676 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2nrdw" podUID="3b5bdb57-1227-4549-a407-7768a82ff571" containerName="registry-server" probeResult="failure" output=< Jan 24 01:09:09 crc kubenswrapper[4676]: timeout: failed to connect service ":50051" within 1s Jan 24 01:09:09 crc kubenswrapper[4676]: > Jan 24 01:09:17 crc kubenswrapper[4676]: I0124 01:09:17.255811 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:09:17 crc kubenswrapper[4676]: E0124 01:09:17.257072 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:09:18 crc kubenswrapper[4676]: I0124 01:09:18.694161 4676 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:09:18 crc kubenswrapper[4676]: I0124 01:09:18.747463 4676 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:09:18 crc kubenswrapper[4676]: I0124 01:09:18.939604 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2nrdw"] Jan 24 01:09:20 crc kubenswrapper[4676]: I0124 01:09:20.043914 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2nrdw" 
podUID="3b5bdb57-1227-4549-a407-7768a82ff571" containerName="registry-server" containerID="cri-o://93c3dc5d89006d93c816256ff7e52391b95edbe86e4b6bff514031d0a377503e" gracePeriod=2 Jan 24 01:09:20 crc kubenswrapper[4676]: I0124 01:09:20.549787 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:09:20 crc kubenswrapper[4676]: I0124 01:09:20.613047 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf4b7\" (UniqueName: \"kubernetes.io/projected/3b5bdb57-1227-4549-a407-7768a82ff571-kube-api-access-gf4b7\") pod \"3b5bdb57-1227-4549-a407-7768a82ff571\" (UID: \"3b5bdb57-1227-4549-a407-7768a82ff571\") " Jan 24 01:09:20 crc kubenswrapper[4676]: I0124 01:09:20.613090 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b5bdb57-1227-4549-a407-7768a82ff571-catalog-content\") pod \"3b5bdb57-1227-4549-a407-7768a82ff571\" (UID: \"3b5bdb57-1227-4549-a407-7768a82ff571\") " Jan 24 01:09:20 crc kubenswrapper[4676]: I0124 01:09:20.613217 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b5bdb57-1227-4549-a407-7768a82ff571-utilities\") pod \"3b5bdb57-1227-4549-a407-7768a82ff571\" (UID: \"3b5bdb57-1227-4549-a407-7768a82ff571\") " Jan 24 01:09:20 crc kubenswrapper[4676]: I0124 01:09:20.614407 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b5bdb57-1227-4549-a407-7768a82ff571-utilities" (OuterVolumeSpecName: "utilities") pod "3b5bdb57-1227-4549-a407-7768a82ff571" (UID: "3b5bdb57-1227-4549-a407-7768a82ff571"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 01:09:20 crc kubenswrapper[4676]: I0124 01:09:20.614663 4676 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b5bdb57-1227-4549-a407-7768a82ff571-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 01:09:20 crc kubenswrapper[4676]: I0124 01:09:20.627269 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b5bdb57-1227-4549-a407-7768a82ff571-kube-api-access-gf4b7" (OuterVolumeSpecName: "kube-api-access-gf4b7") pod "3b5bdb57-1227-4549-a407-7768a82ff571" (UID: "3b5bdb57-1227-4549-a407-7768a82ff571"). InnerVolumeSpecName "kube-api-access-gf4b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 01:09:20 crc kubenswrapper[4676]: I0124 01:09:20.717003 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf4b7\" (UniqueName: \"kubernetes.io/projected/3b5bdb57-1227-4549-a407-7768a82ff571-kube-api-access-gf4b7\") on node \"crc\" DevicePath \"\"" Jan 24 01:09:20 crc kubenswrapper[4676]: I0124 01:09:20.738388 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b5bdb57-1227-4549-a407-7768a82ff571-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b5bdb57-1227-4549-a407-7768a82ff571" (UID: "3b5bdb57-1227-4549-a407-7768a82ff571"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 01:09:20 crc kubenswrapper[4676]: I0124 01:09:20.818554 4676 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b5bdb57-1227-4549-a407-7768a82ff571-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 01:09:21 crc kubenswrapper[4676]: I0124 01:09:21.058684 4676 generic.go:334] "Generic (PLEG): container finished" podID="3b5bdb57-1227-4549-a407-7768a82ff571" containerID="93c3dc5d89006d93c816256ff7e52391b95edbe86e4b6bff514031d0a377503e" exitCode=0 Jan 24 01:09:21 crc kubenswrapper[4676]: I0124 01:09:21.058766 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nrdw" event={"ID":"3b5bdb57-1227-4549-a407-7768a82ff571","Type":"ContainerDied","Data":"93c3dc5d89006d93c816256ff7e52391b95edbe86e4b6bff514031d0a377503e"} Jan 24 01:09:21 crc kubenswrapper[4676]: I0124 01:09:21.058791 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2nrdw" Jan 24 01:09:21 crc kubenswrapper[4676]: I0124 01:09:21.058809 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nrdw" event={"ID":"3b5bdb57-1227-4549-a407-7768a82ff571","Type":"ContainerDied","Data":"d2b9c41fdecc2fd86c961eaa1c90432f775d4cec810f3d698cf239294c2169aa"} Jan 24 01:09:21 crc kubenswrapper[4676]: I0124 01:09:21.058844 4676 scope.go:117] "RemoveContainer" containerID="93c3dc5d89006d93c816256ff7e52391b95edbe86e4b6bff514031d0a377503e" Jan 24 01:09:21 crc kubenswrapper[4676]: I0124 01:09:21.090214 4676 scope.go:117] "RemoveContainer" containerID="706f2526bdcdb0475f55e82716fff7b99c6f21ba4ce5453d4af3afdd341ddbee" Jan 24 01:09:21 crc kubenswrapper[4676]: I0124 01:09:21.125071 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2nrdw"] Jan 24 01:09:21 crc kubenswrapper[4676]: I0124 01:09:21.125897 4676 scope.go:117] "RemoveContainer" containerID="4cbfe82eff6d462ba3148ce02c0da1c0abc3bda8a2446df0a50c15b4fa35d06b" Jan 24 01:09:21 crc kubenswrapper[4676]: I0124 01:09:21.140129 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2nrdw"] Jan 24 01:09:21 crc kubenswrapper[4676]: I0124 01:09:21.164441 4676 scope.go:117] "RemoveContainer" containerID="93c3dc5d89006d93c816256ff7e52391b95edbe86e4b6bff514031d0a377503e" Jan 24 01:09:21 crc kubenswrapper[4676]: E0124 01:09:21.164948 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93c3dc5d89006d93c816256ff7e52391b95edbe86e4b6bff514031d0a377503e\": container with ID starting with 93c3dc5d89006d93c816256ff7e52391b95edbe86e4b6bff514031d0a377503e not found: ID does not exist" containerID="93c3dc5d89006d93c816256ff7e52391b95edbe86e4b6bff514031d0a377503e" Jan 24 01:09:21 crc kubenswrapper[4676]: I0124 01:09:21.164998 4676 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93c3dc5d89006d93c816256ff7e52391b95edbe86e4b6bff514031d0a377503e"} err="failed to get container status \"93c3dc5d89006d93c816256ff7e52391b95edbe86e4b6bff514031d0a377503e\": rpc error: code = NotFound desc = could not find container \"93c3dc5d89006d93c816256ff7e52391b95edbe86e4b6bff514031d0a377503e\": container with ID starting with 93c3dc5d89006d93c816256ff7e52391b95edbe86e4b6bff514031d0a377503e not found: ID does not exist" Jan 24 01:09:21 crc kubenswrapper[4676]: I0124 01:09:21.165031 4676 scope.go:117] "RemoveContainer" containerID="706f2526bdcdb0475f55e82716fff7b99c6f21ba4ce5453d4af3afdd341ddbee" Jan 24 01:09:21 crc kubenswrapper[4676]: E0124 01:09:21.165821 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"706f2526bdcdb0475f55e82716fff7b99c6f21ba4ce5453d4af3afdd341ddbee\": container with ID starting with 706f2526bdcdb0475f55e82716fff7b99c6f21ba4ce5453d4af3afdd341ddbee not found: ID does not exist" containerID="706f2526bdcdb0475f55e82716fff7b99c6f21ba4ce5453d4af3afdd341ddbee" Jan 24 01:09:21 crc kubenswrapper[4676]: I0124 01:09:21.165952 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"706f2526bdcdb0475f55e82716fff7b99c6f21ba4ce5453d4af3afdd341ddbee"} err="failed to get container status \"706f2526bdcdb0475f55e82716fff7b99c6f21ba4ce5453d4af3afdd341ddbee\": rpc error: code = NotFound desc = could not find container \"706f2526bdcdb0475f55e82716fff7b99c6f21ba4ce5453d4af3afdd341ddbee\": container with ID starting with 706f2526bdcdb0475f55e82716fff7b99c6f21ba4ce5453d4af3afdd341ddbee not found: ID does not exist" Jan 24 01:09:21 crc kubenswrapper[4676]: I0124 01:09:21.166058 4676 scope.go:117] "RemoveContainer" containerID="4cbfe82eff6d462ba3148ce02c0da1c0abc3bda8a2446df0a50c15b4fa35d06b" Jan 24 01:09:21 crc kubenswrapper[4676]: E0124 
01:09:21.166470 4676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cbfe82eff6d462ba3148ce02c0da1c0abc3bda8a2446df0a50c15b4fa35d06b\": container with ID starting with 4cbfe82eff6d462ba3148ce02c0da1c0abc3bda8a2446df0a50c15b4fa35d06b not found: ID does not exist" containerID="4cbfe82eff6d462ba3148ce02c0da1c0abc3bda8a2446df0a50c15b4fa35d06b" Jan 24 01:09:21 crc kubenswrapper[4676]: I0124 01:09:21.166508 4676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cbfe82eff6d462ba3148ce02c0da1c0abc3bda8a2446df0a50c15b4fa35d06b"} err="failed to get container status \"4cbfe82eff6d462ba3148ce02c0da1c0abc3bda8a2446df0a50c15b4fa35d06b\": rpc error: code = NotFound desc = could not find container \"4cbfe82eff6d462ba3148ce02c0da1c0abc3bda8a2446df0a50c15b4fa35d06b\": container with ID starting with 4cbfe82eff6d462ba3148ce02c0da1c0abc3bda8a2446df0a50c15b4fa35d06b not found: ID does not exist" Jan 24 01:09:22 crc kubenswrapper[4676]: I0124 01:09:22.267196 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b5bdb57-1227-4549-a407-7768a82ff571" path="/var/lib/kubelet/pods/3b5bdb57-1227-4549-a407-7768a82ff571/volumes" Jan 24 01:09:29 crc kubenswrapper[4676]: I0124 01:09:29.259120 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:09:29 crc kubenswrapper[4676]: E0124 01:09:29.259788 4676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mzrz_openshift-machine-config-operator(bd647b0d-6d3d-432d-81ac-6484a2948211)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" Jan 24 01:09:40 crc kubenswrapper[4676]: I0124 01:09:40.257728 
4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:09:41 crc kubenswrapper[4676]: I0124 01:09:41.340868 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"edd2934dbaa930bc821862a558e8c2c3faf93460442893d12bfeffbb6ed69162"} Jan 24 01:10:18 crc kubenswrapper[4676]: I0124 01:10:18.784938 4676 generic.go:334] "Generic (PLEG): container finished" podID="ce7f9d18-7eec-43f4-908a-d65d210492e7" containerID="d0b712dc713fcee1a9dcfc5340b4f493d3bd6dca5b50bbf686427d3022f3f332" exitCode=0 Jan 24 01:10:18 crc kubenswrapper[4676]: I0124 01:10:18.785058 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgb89/must-gather-rzdhh" event={"ID":"ce7f9d18-7eec-43f4-908a-d65d210492e7","Type":"ContainerDied","Data":"d0b712dc713fcee1a9dcfc5340b4f493d3bd6dca5b50bbf686427d3022f3f332"} Jan 24 01:10:18 crc kubenswrapper[4676]: I0124 01:10:18.786489 4676 scope.go:117] "RemoveContainer" containerID="d0b712dc713fcee1a9dcfc5340b4f493d3bd6dca5b50bbf686427d3022f3f332" Jan 24 01:10:19 crc kubenswrapper[4676]: I0124 01:10:19.171841 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pgb89_must-gather-rzdhh_ce7f9d18-7eec-43f4-908a-d65d210492e7/gather/0.log" Jan 24 01:10:30 crc kubenswrapper[4676]: I0124 01:10:30.755767 4676 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pgb89/must-gather-rzdhh"] Jan 24 01:10:30 crc kubenswrapper[4676]: I0124 01:10:30.756606 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-pgb89/must-gather-rzdhh" podUID="ce7f9d18-7eec-43f4-908a-d65d210492e7" containerName="copy" containerID="cri-o://ebdf94432be40adc9409ec2d7d078306d66cbfc22f6dc35636a6fecb28c07f9e" gracePeriod=2 Jan 24 01:10:30 crc 
kubenswrapper[4676]: I0124 01:10:30.769066 4676 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pgb89/must-gather-rzdhh"] Jan 24 01:10:30 crc kubenswrapper[4676]: I0124 01:10:30.905741 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pgb89_must-gather-rzdhh_ce7f9d18-7eec-43f4-908a-d65d210492e7/copy/0.log" Jan 24 01:10:30 crc kubenswrapper[4676]: I0124 01:10:30.906442 4676 generic.go:334] "Generic (PLEG): container finished" podID="ce7f9d18-7eec-43f4-908a-d65d210492e7" containerID="ebdf94432be40adc9409ec2d7d078306d66cbfc22f6dc35636a6fecb28c07f9e" exitCode=143 Jan 24 01:10:31 crc kubenswrapper[4676]: I0124 01:10:31.256177 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pgb89_must-gather-rzdhh_ce7f9d18-7eec-43f4-908a-d65d210492e7/copy/0.log" Jan 24 01:10:31 crc kubenswrapper[4676]: I0124 01:10:31.256958 4676 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgb89/must-gather-rzdhh" Jan 24 01:10:31 crc kubenswrapper[4676]: I0124 01:10:31.405259 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2kcb\" (UniqueName: \"kubernetes.io/projected/ce7f9d18-7eec-43f4-908a-d65d210492e7-kube-api-access-z2kcb\") pod \"ce7f9d18-7eec-43f4-908a-d65d210492e7\" (UID: \"ce7f9d18-7eec-43f4-908a-d65d210492e7\") " Jan 24 01:10:31 crc kubenswrapper[4676]: I0124 01:10:31.405497 4676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ce7f9d18-7eec-43f4-908a-d65d210492e7-must-gather-output\") pod \"ce7f9d18-7eec-43f4-908a-d65d210492e7\" (UID: \"ce7f9d18-7eec-43f4-908a-d65d210492e7\") " Jan 24 01:10:31 crc kubenswrapper[4676]: I0124 01:10:31.411555 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ce7f9d18-7eec-43f4-908a-d65d210492e7-kube-api-access-z2kcb" (OuterVolumeSpecName: "kube-api-access-z2kcb") pod "ce7f9d18-7eec-43f4-908a-d65d210492e7" (UID: "ce7f9d18-7eec-43f4-908a-d65d210492e7"). InnerVolumeSpecName "kube-api-access-z2kcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 01:10:31 crc kubenswrapper[4676]: I0124 01:10:31.507958 4676 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2kcb\" (UniqueName: \"kubernetes.io/projected/ce7f9d18-7eec-43f4-908a-d65d210492e7-kube-api-access-z2kcb\") on node \"crc\" DevicePath \"\"" Jan 24 01:10:31 crc kubenswrapper[4676]: I0124 01:10:31.540330 4676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce7f9d18-7eec-43f4-908a-d65d210492e7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ce7f9d18-7eec-43f4-908a-d65d210492e7" (UID: "ce7f9d18-7eec-43f4-908a-d65d210492e7"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 01:10:31 crc kubenswrapper[4676]: I0124 01:10:31.609359 4676 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ce7f9d18-7eec-43f4-908a-d65d210492e7-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 24 01:10:31 crc kubenswrapper[4676]: I0124 01:10:31.916017 4676 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pgb89_must-gather-rzdhh_ce7f9d18-7eec-43f4-908a-d65d210492e7/copy/0.log" Jan 24 01:10:31 crc kubenswrapper[4676]: I0124 01:10:31.916604 4676 scope.go:117] "RemoveContainer" containerID="ebdf94432be40adc9409ec2d7d078306d66cbfc22f6dc35636a6fecb28c07f9e" Jan 24 01:10:31 crc kubenswrapper[4676]: I0124 01:10:31.916719 4676 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pgb89/must-gather-rzdhh" Jan 24 01:10:31 crc kubenswrapper[4676]: I0124 01:10:31.940723 4676 scope.go:117] "RemoveContainer" containerID="d0b712dc713fcee1a9dcfc5340b4f493d3bd6dca5b50bbf686427d3022f3f332" Jan 24 01:10:32 crc kubenswrapper[4676]: I0124 01:10:32.277616 4676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7f9d18-7eec-43f4-908a-d65d210492e7" path="/var/lib/kubelet/pods/ce7f9d18-7eec-43f4-908a-d65d210492e7/volumes" Jan 24 01:11:43 crc kubenswrapper[4676]: I0124 01:11:43.845657 4676 scope.go:117] "RemoveContainer" containerID="6e1d712a5b60b5fed2dd683d40f7c0b75205c48776013f310795ca214bdf4a77" Jan 24 01:12:09 crc kubenswrapper[4676]: I0124 01:12:09.363927 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 01:12:09 crc kubenswrapper[4676]: I0124 01:12:09.364636 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 01:12:39 crc kubenswrapper[4676]: I0124 01:12:39.364779 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 01:12:39 crc kubenswrapper[4676]: I0124 01:12:39.365321 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" 
podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 01:13:09 crc kubenswrapper[4676]: I0124 01:13:09.364313 4676 patch_prober.go:28] interesting pod/machine-config-daemon-7mzrz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 01:13:09 crc kubenswrapper[4676]: I0124 01:13:09.365055 4676 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 01:13:09 crc kubenswrapper[4676]: I0124 01:13:09.365131 4676 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" Jan 24 01:13:09 crc kubenswrapper[4676]: I0124 01:13:09.366068 4676 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"edd2934dbaa930bc821862a558e8c2c3faf93460442893d12bfeffbb6ed69162"} pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 01:13:09 crc kubenswrapper[4676]: I0124 01:13:09.366134 4676 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" podUID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerName="machine-config-daemon" containerID="cri-o://edd2934dbaa930bc821862a558e8c2c3faf93460442893d12bfeffbb6ed69162" gracePeriod=600 Jan 24 
01:13:09 crc kubenswrapper[4676]: I0124 01:13:09.626998 4676 generic.go:334] "Generic (PLEG): container finished" podID="bd647b0d-6d3d-432d-81ac-6484a2948211" containerID="edd2934dbaa930bc821862a558e8c2c3faf93460442893d12bfeffbb6ed69162" exitCode=0 Jan 24 01:13:09 crc kubenswrapper[4676]: I0124 01:13:09.627040 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerDied","Data":"edd2934dbaa930bc821862a558e8c2c3faf93460442893d12bfeffbb6ed69162"} Jan 24 01:13:09 crc kubenswrapper[4676]: I0124 01:13:09.627073 4676 scope.go:117] "RemoveContainer" containerID="6b709e3553e881b387af6667b2735e4c905d3b5ba75799e955a72575129b29d9" Jan 24 01:13:10 crc kubenswrapper[4676]: I0124 01:13:10.643418 4676 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mzrz" event={"ID":"bd647b0d-6d3d-432d-81ac-6484a2948211","Type":"ContainerStarted","Data":"64412b3d11bc4a5ba20e991951bf85349b15c728095f990ca935e7e9fc752a77"}